Search Results

Search found 26004 results on 1041 pages for 'debian based'.

Page 351 of 1041

  • How to configure installed Ruby and gems?

    - by NARKOZ
    Hi. My current gem env returns:

        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.6
          - RUBY VERSION: 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]
          - INSTALLATION DIRECTORY: /home/USERNAME/.gems
          - RUBYGEMS PREFIX: /home/narkoz
          - RUBY EXECUTABLE: /usr/bin/ruby1.8
          - EXECUTABLE DIRECTORY: /home/USERNAME/.gems/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86_64-linux
          - GEM PATHS:
            - /home/USERNAME/.gems
            - /usr/lib/ruby/gems/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
            - "gempath" => ["/home/USERNAME/.gems", "/usr/lib/ruby/gems/1.8"]
            - "gemhome" => "/home/USERNAME/.gems"
          - REMOTE SOURCES:
            - http://rubygems.org/

    How can I change the path /home/USERNAME/ to my own without uninstalling? OS: Debian Linux.
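
    A minimal sketch of one way to do this (assuming RubyGems 1.3, which honours GEM_HOME/GEM_PATH and ~/.gemrc; the path /home/newuser/.gems is only a placeholder):

        # ~/.gemrc (YAML) -- RubyGems reads these keys at startup
        gemhome: /home/newuser/.gems
        gempath:
          - /home/newuser/.gems
          - /usr/lib/ruby/gems/1.8

        # shell profile (e.g. ~/.bashrc) -- same settings for the environment
        export GEM_HOME=/home/newuser/.gems
        export GEM_PATH=/home/newuser/.gems:/usr/lib/ruby/gems/1.8
        export PATH="$GEM_HOME/bin:$PATH"

        # already-installed gems can usually just be moved rather than reinstalled
        mv /home/USERNAME/.gems /home/newuser/.gems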

  • How to properly start gvfs without gnome?

    - by 9000
    I have a Debian testing box with Xfce (no Gnome, no Nautilus). It has all gvfs-related stuff installed, including all backends and fuse interface. But any attempts to gvfs-mount anything (like sftp://... or smb://...) fail with error opening file: Operation not supported, and gigolo shows only 'unix device (file)' in the list of supported protocols. My ~/.gvfs has rwx permissions, and I'm a member of fuse group; other fuse-related stuff works for me. What do I do? Where to look?
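
    A rough sketch of what is usually missing outside a GNOME session (the daemon paths below are assumptions and vary by Debian release): gvfs needs a D-Bus session bus and its daemons running, which a full desktop session normally starts for you.

        # start a session bus if the Xfce session did not provide one
        eval "$(dbus-launch --sh-syntax)"

        # launch the gvfs master daemon and the FUSE bridge by hand
        /usr/lib/gvfs/gvfsd &
        /usr/lib/gvfs/gvfs-fuse-daemon ~/.gvfs &

        # then retry, e.g.
        gvfs-mount sftp://user@host/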

  • Homedir inside homedir restricted access

    - by blid
    On my VPS I've installed Debian with Apache + PHP. I have two users: foo and bar. Apache is configured to execute PHP files from /home/foo/htdocs. I created the directory /home/foo/htdocs/bar/ and made it the home directory for user bar. However, I need to add a restriction: bar must not be able to read, write or execute any files outside his own directory, while Apache still has to be able to execute all PHP files under /home/foo/htdocs. I tried to chown the bar directory to user bar only, and also experimented a lot with chmod, but without result so far. If there's a better way to satisfy these needs, don't hesitate to write about it. Thanks in advance.
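
    One hedged approach (not from the original post, and it assumes bar only needs file access over SSH/SFTP): OpenSSH can jail the account with ChrootDirectory. Note that sshd requires every component of the chroot path to be root-owned and not group-writable, so this may need a layout change.

        # /etc/ssh/sshd_config -- assumed Match-block sketch
        Match User bar
            ChrootDirectory /home/foo/htdocs/bar
            ForceCommand internal-sftp
            AllowTcpForwarding no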

  • How to properly edit hosts, hostname and resolv.conf?

    - by Firewall
    I've been searching the internet for a real beginner tutorial on the subject but could not find any direct info on how to edit these files the proper way. I've got a Debian internet server that I use to host some personal domains and that runs squid and rTorrent. The server is up and running with no problems, but I am confused about a few things. Let's say I named my server (foo), my domain is (example.com) and my public IP is 95.211.133.200. Now:

        Should /etc/hostname contain:  foo.example.com  or  foo  <----- just the server name?

        Should /etc/hosts contain:
            127.0.0.1        localhost.localdomain localhost
            95.211.133.200   foo.example.com foo

        Should /etc/resolv.conf contain (along with the nameservers) both:
            domain example.com
            search example.com
        or just the first one?

    Are there any other files that I should edit in order to make things right? Last thing: the command domainname returns (none). I believe it should return (example.com). What should I do to correct that?
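
    A hedged example layout for this setup (conventional Debian practice, not taken from an answer; the nameserver addresses are placeholders):

        # /etc/hostname -- just the short name
        foo

        # /etc/hosts -- FQDN first, then the short alias
        127.0.0.1        localhost.localdomain localhost
        95.211.133.200   foo.example.com foo

        # /etc/resolv.conf
        search example.com
        nameserver 8.8.8.8
        nameserver 8.8.4.4

    With a hosts line like the one above, hostname -f should print foo.example.com. Note that domainname reports the NIS domain and normally stays (none) unless you actually run NIS; dnsdomainname is the command that should print example.com.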

  • MySQL simple replication problem: 'show master status' produces 'Empty set'?

    - by simon
    I've been setting up MySQL master replication (on Debian 6.0.1) following these instructions faithfully: http://www.neocodesoftware.com/replication/ I've got as far as:

        mysql> show master status;

    but this is unfortunately producing the following, rather than any useful output:

        Empty set (0.00 sec)

    The error log at /var/log/mysql.err is just an empty file, so that's not giving me any clues. Any ideas? This is what I have put in /etc/mysql/my.cnf on one server (amended appropriately for the other server):

        server-id                = 1
        replicate-same-server-id = 0
        auto-increment-increment = 2
        auto-increment-offset    = 1
        master-host              = 10.0.0.3
        master-user              = <myusername>
        master-password          = <mypass>
        master-connect-retry     = 60
        replicate-do-db          = fruit
        log-bin                  = /var/log/mysql-replication.log
        binlog-do-db             = fruit

    And I have set up users and can connect from MySQL on Server A to the database on Server B using the username/password/ipaddress above.
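
    A hedged sketch of the usual cause (a guess, not the accepted answer): Empty set from SHOW MASTER STATUS generally means binary logging never actually came on, which on Debian often happens when the directives sit outside the [mysqld] section or the server was not restarted; the log location below also avoids permission trouble under /var/log.

        # /etc/mysql/my.cnf -- make sure these live under [mysqld]
        [mysqld]
        server-id    = 1
        log-bin      = /var/lib/mysql/mysql-bin.log
        binlog-do-db = fruit

        # then restart and verify
        /etc/init.d/mysql restart
        mysql -e "SHOW VARIABLES LIKE 'log_bin';"   # should say ON
        mysql -e "SHOW MASTER STATUS\G"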

  • HFS partition mounting read-only

    - by Sid
    Hey, I have an external Western Digital hard drive with two HFS partitions with journaling disabled. When I connect it to a computer running Linux (Debian or Ubuntu), frequently both partitions are mounted read-only. In the past, mounting them on my MacBook and executing the command to disable journaling often worked (even though it would tell me that journaling was already disabled), but I would love to have a solution which works every time. Thanks! Edit: In light of Chris Johnsen's comment below - my question is how to mount the filesystem read+write on Linux, since it is not doing so automatically itself.
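
    A sketch of the usual workaround (device name and mount point are placeholders): the kernel's hfsplus driver falls back to read-only on journaled or not-cleanly-unmounted volumes, and the force option overrides that.

        # identify the partition first, e.g. /dev/sdb2
        sudo mount -t hfsplus -o remount,force,rw /dev/sdb2 /media/wd

        # or, if it is not mounted yet
        sudo mount -t hfsplus -o force,rw /dev/sdb2 /media/wd

        # if the volume is marked dirty, check it first (fsck.hfsplus is in the hfsprogs package)
        sudo fsck.hfsplus /dev/sdb2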

  • Nginx PHP-FPM Basic Auth

    - by Lari13
    I have nginx with php-fpm installed on Debian Squeeze. The directory tree is:

        /var/www/mysite
            index.php
            secret_folder_1
                admin.php
                static.html
            secret_folder_2
                admin.php
                static.html
            pictures
                img01.jpg

    I need to close secret_folder_1 and secret_folder_2 with basic_auth. The config now looks like this:

        location ~ /secret_folder_1/.+\.php$ {
            root /var/www/mysite/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME /var/www/mysite$fastcgi_script_name;
            include fastcgi_params;
            auth_basic "Restricted Access";
            auth_basic_user_file /path/to/.passwd;
        }

        location ~ /secret_folder_1/.* {
            root /var/www/mysite/;
            auth_basic "Restricted Access";
            auth_basic_user_file /path/to/.passwd;
        }

    Same config for secret_folder_2. Is this normal? I mean, the first location serves PHP files in the restricted folder and the second location serves static files. Can it be simplified?
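
    A hedged sketch of one way to fold each pair into a single block (paths and regex mirror the question; whether this fits the rest of your config is an assumption): nginx allows nested locations, so the auth can live once on the outer block while only the PHP handling stays inner. The ^~ modifier keeps any site-wide PHP regex location from bypassing the auth.

        location ^~ /secret_folder_1/ {
            root                 /var/www/mysite/;
            auth_basic           "Restricted Access";
            auth_basic_user_file /path/to/.passwd;

            location ~ \.php$ {
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /var/www/mysite$fastcgi_script_name;
                include       fastcgi_params;
            }
        }

    An identical block would cover secret_folder_2.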

  • Emails sent from ColdFusion using the same SMTP/Exchange server work from one machine but fail for another

    - by Peter Herdenborg
    First, apologies if this question is too vague or has too little information to really be answerable. I am not normally working with these issues, and I don't have full access to the environment. However, the hosting provider seems to have a hard time tracking down the issue, so I am hoping that someone can at least provide me with some qualified guesses about the most likely problem. Here goes: a client I work for has a hosted IT environment based on virtual machines running Windows 2008 R2 Standard. Our website, based on ColdFusion 9, was recently migrated from one virtual machine to another, and though ColdFusion is configured in exactly the same way, using the same SMTP server (i.e. the client's Exchange server hosted in the same environment and in the same AD as both web servers), sending emails to external recipients is no longer working. It is still working fine when testing from the old machine. This is what I've learnt so far (all emails are sent using a valid from-address on the client's domain):

        Emails sent to other recipients on the same domain are delivered without any problem.
        Emails sent to external recipients on other domains are never delivered.
        When sending emails to both internal and external recipients, no emails are delivered.
        When one of these emails is received at an internal address, the sender is now indicated as "[email protected]", while when sent from the old machine it used to say just "sender". This seems to hint that the Exchange machine "recognizes" the old web server while the new one is a stranger to it.
        In ColdFusion's mail log, all messages appear to be successfully delivered to the SMTP server.

    Any ideas on what settings to look at, what log entries to search for, or how to compare the old web server with the new one will be highly appreciated.

  • Linux - Block ssh users from accessing other machines on the network

    - by Sam
    I have set up a virtual machine on my network for uni project development. I have 6 team members and I don't want them to SSH in and start sniffing my network traffic. I have already set the firewall on my Windows 7 PCs to ignore any connection attempts from the virtual machine, but I would like to go a step further and not allow any network access from the VM to other machines on my network. Team members will access the VM by SSH. The only external port forwarded is to vm:22. The VM is running in VirtualBox on a bridged network connection, running the latest Debian. If someone could tell me how to do this I would be much obliged.
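
    A hedged iptables sketch of the usual approach (run on the VM itself; the 192.168.1.0/24 subnet and gateway address are assumptions you would adjust):

        # allow reply traffic so inbound SSH sessions keep working
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # let the VM reach its gateway/DNS if needed
        iptables -A OUTPUT -d 192.168.1.1 -j ACCEPT

        # block everything else destined for the local subnet
        iptables -A OUTPUT -d 192.168.1.0/24 -j DROP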

  • SQLite DB borked when opened on a different machine

    - by pruefsumme
    Hello, I'm using SQLite to store some data. The primary database is on a NAS (Debian Lenny, 2.6.15, armv4l) since the NAS runs a script which updates the data every day. A typical "select * from tableX" looks like this:

        2010-12-28|20|62.09|25170.0
        2010-12-28|21|49.28|23305.7
        2010-12-28|22|48.51|22051.1
        2010-12-28|23|47.17|21809.9

    When I copy the DB to my main computer (Mac OS X) and run the same SQL query, the output is:

        2010-12-28|20|1.08115035175016e-160|25170.0
        2010-12-28|21|2.39343503830763e-259|-9.25596535779558e+61
        2010-12-28|22|-1.02951149572792e-86|1.90359837597183e+185
        2010-12-28|23|-1.10707273937033e-234|-2.35343828462275e-185

    The 3rd and 4th columns have the type REAL. Interesting fact: when the numbers are integer (i.e. they end with ".0"), there is no difference between the two databases. In all other cases, the differences are ... hm ... surprising? I can't seem to find a pattern. If someone's got a clue - please share!
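
    A hedged workaround sketch (my assumption is a binary incompatibility in how REAL values are stored by the old ARM build, so the raw file itself is suspect): transferring the data as SQL text instead of copying the file sidesteps any on-disk float format issue.

        # on the NAS: export the database as portable SQL text
        sqlite3 /path/to/data.db .dump > data.sql

        # on the Mac: rebuild a fresh database from the dump
        sqlite3 data_new.db < data.sql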

  • How to get the PID of a process started by /bin/su -c

    - by crash3k
    I'm writing an init.d script for a Java app, but the app should be run by another user. (The OS I'm using is Debian Squeeze.) I already have this:

        /bin/su - $USER -c "cd $PATH; echo $PASSWORD | $JAVA -Xmx256m -jar $PATH/app.jar -d > /dev/null" &
        PID=$!
        /bin/su - $USER -c "echo $PID > $PIDFILE"

    But this will of course only save the PID of the /bin/su process instead of the PID of the created Java process.
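
    Not from the thread, just a sketch of the usual Debian way (note I renamed $PATH to $APPDIR, since overriding PATH breaks command lookup, and the password-on-stdin part would need separate handling): start-stop-daemon can drop privileges and record the real daemon PID for you.

        # writes the PID of the started $JAVA process, not of an su wrapper
        start-stop-daemon --start --chuid "$USER" --chdir "$APPDIR" \
            --background --make-pidfile --pidfile "$PIDFILE" \
            --exec "$JAVA" -- -Xmx256m -jar "$APPDIR/app.jar" -d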

  • Replacement for NIS/YP

    - by mdpc
    The company that I am working for is embarking on replacing the current locally developed NIS/YP structure with LDAP. We already have AD in house for the Windows stuff and would like to consider using an AD-based system. The AD people are quite restrictive and would not support extensive modifications. The replacement needs to support the full capabilities of the NIS/YP suite, including netgroups, login restrictions to specific servers for specific users or groups of users, consistent passwords between the *nix and Windows environments, etc. Our environment is a mixture of Linux (SUSE, RH, Debian), Sun, IBM, HP and MPRAS, as well as a NetApp. So whatever we use must cover all of these environments. We have looked at Likewise, but our management wants other alternatives to compare with. What other things should I be looking at, and what is your assessment of the alternatives? Thanks

  • optimal folder structure for storing 100k files on a USB drive

    - by cherouvim
    I need to store 100k files (around 40GB) in a USB drive. Each file has a unique int id (e.g. 45000). Option one is to put all files in a single folder:

        root/
        root/1.pdf
        root/2.pdf
        root/3.pdf
        ...
        root/567.pdf
        root/568.pdf
        root/569.pdf
        ...
        root/10001.pdf
        root/10002.pdf
        root/10003.pdf
        ...
        root/99998.pdf
        root/99999.pdf
        root/100000.pdf

    Option two is to create a [1-9][0-9]* folder hierarchy based on that id:

        root/
        root/1/file.pdf
        root/2/file.pdf
        root/3/file.pdf
        ...
        root/5/6/7/file.pdf
        root/5/6/8/file.pdf
        root/5/6/9/file.pdf
        ...
        root/1/0/0/0/1/file.pdf
        root/1/0/0/0/2/file.pdf
        root/1/0/0/0/3/file.pdf
        ...
        root/9/9/9/9/8/file.pdf
        root/9/9/9/9/9/file.pdf
        root/1/0/0/0/0/0/file.pdf

    Which option will scale better? I can understand that the second option will require tons of folders, but each folder will at most contain 10 folders and 1 file. Maintenance will not be an issue since everything will be controlled by an application. Note that this is a USB drive on linux and based on the above I'd also like to know whether I should go with FAT32 or NTFS.
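
    For illustration only (a hedged sketch; the digit-per-level mapping follows the second option above): the id-to-path mapping is trivial to generate in shell, which also makes it easy to benchmark both layouts on the actual drive before committing.

        # map id 45000 -> root/4/5/0/0/0/file.pdf
        id=45000
        dir="root/$(echo "$id" | sed 's|.|&/|g')"
        mkdir -p "$dir"
        cp input.pdf "${dir}file.pdf"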

  • What is the risk of introducing non-standard image machines to a corporate environment?

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

  • Log with iptables which user is delivering email to port 25

    - by Maus
    Because we got blacklisted on CBL I set up the following firewall rules with iptables:

        #!/bin/bash
        iptables -A OUTPUT -d 127.0.0.1 -p tcp -m tcp --dport 25 -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 25 -m owner --gid-owner mail -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 25 -m owner --uid-owner root -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 25 -m owner --uid-owner Debian-exim -j ACCEPT
        iptables -A OUTPUT -p tcp -m limit --limit 15/minute -m tcp --dport 25 -j LOG --log-prefix "LOCAL_DROPPED_SPAM"
        iptables -A OUTPUT -p tcp -m tcp --dport 25 -j REJECT --reject-with icmp-port-unreachable

    I'm not able to connect to port 25 from localhost with any user other than root or a mail group member - so it seems to work. Still, some questions remain:

        How effective do you rate this rule-set to prevent spam coming from bad PHP scripts hosted on the server?
        Is there a way to block port 25 and 587 within the same statement?
        Is the usage of /usr/sbin/sendmail also limited or blocked by this rule-set?
        Is there a way to log the username of all other attempts which try to deliver stuff to port 25?
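
    On the second and fourth points, a hedged sketch (standard iptables modules, but untested against this exact rule-set): the multiport match covers 25 and 587 in one rule, and while iptables cannot log a username directly, --log-uid on the LOG target records the numeric UID of the owning process.

        # match both submission ports in one statement
        iptables -A OUTPUT -p tcp -m multiport --dports 25,587 -m owner --gid-owner mail -j ACCEPT

        # log the numeric UID of whoever else tries to deliver mail
        iptables -A OUTPUT -p tcp -m multiport --dports 25,587 \
            -m limit --limit 15/minute -j LOG --log-prefix "LOCAL_DROPPED_SPAM " --log-uid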

  • How to fix Windows newline characters on SFTP synchronization in Eclipse (PDT)

    - by superspace
    Hello, I have a problem with Windows newline characters being introduced into text files on Eclipse SFTP synchronization (via JCraft's SFTP plugin). I've set "New text file line delimiter" to Unix and have even sanitized the file with "fromdos", but every time I upload using the SFTP plugin, Windows newline characters can be seen in the remote file as "^M" characters (when viewed in vi). A point to note is that if I upload using an external SFTP client, it's all fine. Eclipse version: PDT (Helios). SFTP: JCraft SFTP plugin. Local environment: Ubuntu 10.04. Remote environments: FreeBSD 6.4, Debian 4.0. What am I missing? My co-workers would thank you for the solution :) Thanks in advance.

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (Java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, its entry in the process list says it is using 470mb of RAM while the 'used' memory immediately drops by over 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700mb. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking Java process in this case? If I use free, before:

                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:      149264    1947888
        Swap:            0          0          0

    free after:

                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:     1094116    1003036
        Swap:            0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452mb, does that mean that the kernel is all of a sudden using an additional 500mb?

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two Wordpress sites and some Java-based collaboration software. The server has 2G of memory and is currently using about 1.8G of it. Right now what's on here is pretty much a pilot project that's getting negligible traffic, so I think it's pretty clear that I'll be needing more memory. I was wondering how, if I were to release it, I might anticipate my memory needs based on the traffic it gets. I've poked around on Google and what I've found has been a bit tenuous. Is there a good heuristic one should use when calculating memory demands as a function of the base (no traffic) load on the server? For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like actual memory used and isn't an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be experimentally tested, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B to figure this out is to add a bunch of swap space to the server, give it some traffic and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb to estimate how much memory I should plan on in advance... or if what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).

  • SSH attack on CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added hosts.allow and hosts.deny for ssh). So I created new instances, set up IP-based SSH that allows only our IPs through the AWS firewall (ec2-authorize), and changed the default SSH port 22 to some other port. But two days back I found I was not able to log in to the server, and when I tried on port 22 the ssh connected; I found that sshd_config had been changed, and when I tried to edit sshd_config I found root had no write permission on the file. So I tried a chmod and it said access denied for the 'root' user. This is very strange. I checked the secure log and history and found nothing informative. I have PHP, Ruby on Rails, Java and Wordpress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances on CentOS (5.2, 5.4). I have instances on Debian as well; those are working fine. Is this a CentOS/RightScale issue? Guys, what security measures should I take to prevent this? Please help, this is very critical. Thanks
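
    One hedged thing to check (an assumption based on the "root cannot write its own file" symptom, not a confirmed diagnosis): attackers and rootkits commonly set the immutable attribute on files they replace, which produces exactly this behaviour.

        # show extended attributes; an 'i' flag means immutable even for root
        lsattr /etc/ssh/sshd_config

        # clear it, then inspect/replace the file from a known-good source
        chattr -i /etc/ssh/sshd_config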

  • Compilation of Etherpad fails in an OpenVZ VE

    - by ulf
    Hi everyone. I'm almost giving up, this will be my last try: I am trying to compile Etherpad on my OpenVZ server. It's running Debian 5.0 as the host system; in the VE I've got Ubuntu 10.04. I installed Etherpad in this VE with the instructions from the official Ubuntu wiki: https://wiki.ubuntu.com/Etherpad. Everything runs fine until it comes to compilation. After calling bin/build.sh as described in the wiki, the first steps run fine. But then I run into a memory error:

        java.io.IOException: Cannot run program "cp": java.io.IOException: error=12, Cannot allocate memory

    Well, I understand the error message but don't see the cause. The command free tells me that there's plenty of memory left in this VE:

                     total       used       free     shared    buffers     cached
        Mem:       2415236    1140872    1274364          0          0          0
        -/+ buffers/cache:     1140872    1274364
        Swap:            0          0          0

    Beautiful. But even repeating the compilation process doesn't bring me any further. Any help would be appreciated.
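
    A hedged pointer rather than a confirmed fix: error=12 from Runtime.exec usually means the JVM's fork failed against the OpenVZ memory accounting rather than against free RAM, and the container's bean counters show whether a limit was hit (the container id and page counts below are placeholders).

        # inside the VE: a non-zero failcnt on privvmpages/vmguarpages points at the limit
        cat /proc/user_beancounters

        # on the Debian host: raise the limits for the VE, e.g. id 101
        vzctl set 101 --privvmpages 1048576:1153434 --save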

  • How can one create a bootable Linux USB key that works on Mac (Intel 64-bit CPU) hardware?

    - by user3621
    Hi, I'm trying to create a bootable USB key with Linux (Debian) that can be booted on Intel Mac hardware. I have read that the Mac's EFI can only boot GPT GUID-partitioned disks. I'm desperately trying to find a good tutorial which explains how to create such a key. Here is what I have done so far: created a GUID partition table on the key using GNU parted on Linux; created an HFS+ or ext3 partition on the key, with the boot flag on; installed a Linux .iso with UNetbootin. While all steps were successful, and in some cases I could even boot on a PC, booting on the Mac (a MacBook) always failed. I should mention that I held the "alt" key while booting the Mac, and the only visible bootable disk was the hard disk. Thanks for any advice. PS: I have tried with rEFIt as well. In one case I had a "windows" icon, but it then failed to boot with a message like "no system found".

  • Conflicts with MS Office temporary files when using Offline Folders on Vista

    - by Tambet
    We are using the Offline Folders feature of Windows Vista to make files on network shares available when out of the office. Mostly it is working, but every time I do a sync I get a lot of errors like this:

        D500E7B8.tmp - A file was deleted on this computer and changed on the server while this computer was offline.

    There are hundreds of them. I always select all of them and choose the resolution "Delete from both locations". But what is causing this and how can I avoid it? I suspect the reason is that we are using Debian and Samba (3.4.7) on our file server. I've been looking for some Samba options that would cure this, but with no success. I learned that the cause is probably that both Word and Excel use a specific pattern to change files - they never change the original file, but instead always write a new temporary file and rename it to the original file when you click Save. This is documented here: http://support.microsoft.com/kb/211632/?FR=1.

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of mongodb. Output from the script (to STDERR) has the following exceptions, but the backup completes and the dump files are created:

        ###### WARNING ######
        STDERR written to during mongodump execution.
        The backup probably succeeded, as mongodump sometimes writes to STDERR, but you may wish to scan the error log below:

        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: HostAndPort: bad port #
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed

    I know that the host and port are correct. If I run

        mongodump --host=127.0.0.1:27017 --journal

    (which is the effective command from automongobackup, based on the options set and my reading of the src code), everything runs clean without any error reporting and the dump files are created as expected. Why would automongobackup report connection errors, even though it does create the dump files, yet a straight call to mongodump does not?

        Debian 6.0 Lenny (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23))
        AutoMongoBackup VER 0.9
        mongodb v 2.0.2

  • Type 1 Hypervisor on the desktop

    - by Blazemore
    I have a powerful home PC, and I've used VirtualBox to run Linux distros in Windows (and vice versa). I'm interested in trying out a lightweight type 1 hypervisor to run all my operating systems (Windows 7, Debian, Arch) and was looking for suggestions on which to pick and how to implement this. From what I gather, a type 1 hypervisor is a lightweight OS which simply provides VM management functionality. Will I get reasonable performance under each guest OS? Can all the guest OSs have access to a shared data drive, or is it best to have a storage server in another guest OS and mount it over the virtual network? What about gaming - is this feasible, or will I realistically need to run Win7 on bare metal? I'd appreciate any input.
