Search Results

Search found 29040 results on 1162 pages for 'ubuntu tweak'.


  • change default userid for connecting to local AFP share?

    - by Stew
    I've got Netatalk & Avahi running on a local Ubuntu server--I use two different userids, "afp" for Time Machine and "stew" to access my media files etc. In order to mount a shared directory on my server, I have to click "Connect As..." and enter my userid/password every time, because it always tries to log in using Time Machine's userid. I'm not sure if this is because that userid is set as default, or just because it's the last userid that logged in to that server--either way: Is there a way to change the default userid for connecting to a given server? Mega extra credit: I'd love to have this automated, such that my userid, "stew", is always logged in (and heck, it'd be great to have the directories always mounted, too!) whenever the server is available. Thanks!
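
    One possible workaround on the Mac side (not from the asker): script the mount with the desired userid so Finder never has to guess, and let the password come from the Keychain. Saved as a login item, this also gets the share mounted automatically at login whenever the server is reachable. The server and share names below are placeholders:

        # run on the Mac client; nothing changes on the Ubuntu server
        osascript -e 'mount volume "afp://stew@server.local/media"'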

    Read the article

  • Configuring linux server firewall to allow access from a certain range of IP addresses

    - by eggman20
    Hi guys, I'm new to Linux servers. I'm currently trying to get an Ubuntu 10.10 server up and running for the first time, and I'm using Webmin for administration. I'm stuck on setting up the firewall. What I need to do is ONLY allow a range of IPs (e.g. 128.171.21.1 - 128.171.21.100) to access the HTTP server and Webmin. I've seen a lot of tutorials, but none of them fits what I need. Thanks in advance!
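
    With iptables (which Webmin's Linux Firewall module manages underneath), one way to express this is the iprange match. A minimal sketch, not from the asker, assuming Webmin is on its default port 10000; adjust the ports and add your SSH rule before the drop:

        iptables -A INPUT -p tcp -m multiport --dports 80,10000 \
                 -m iprange --src-range 128.171.21.1-128.171.21.100 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --dports 80,10000 -j DROP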

    Read the article

  • Determine process using a port, without sudo

    - by pat
    I'd like to find out which process (in particular, its process id) is using a given port. The one catch is that I don't want to use sudo, nor am I logged in as root. The processes I want this to work for are run by the same user who is asking for the process id, so I would have thought this was simple. Neither lsof nor netstat will tell me the process id unless I run them with sudo, though they will tell me that the port is in use. As some extra context: I have various apps all connecting via SSH to a server I manage and creating reverse port forwards. Once those are set up, my server does some processing using the forwarded port, and then the connection can be killed. If I can map specific ports (each app has its own) to processes, this is a simple script. Any suggestions? This is on an Ubuntu box, by the way, but I'm guessing any solution will be standard across most Linux distros.
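
    One approach that needs no root at all is to read /proc directly: find the socket inode for the port in /proc/net/tcp, then look for that inode among your own processes' file descriptors (you can only read the fds of processes you own, which is exactly the case described). A rough sketch, with 8022 as a placeholder port; check /proc/net/tcp6 as well if the forward is IPv6:

        port_hex=$(printf '%04X' 8022)
        inode=$(awk -v p=":$port_hex" '$2 ~ p"$" {print $10}' /proc/net/tcp)
        for fd in /proc/[0-9]*/fd/*; do
            if [ "$(readlink "$fd" 2>/dev/null)" = "socket:[$inode]" ]; then
                pid=${fd#/proc/}; echo "${pid%%/*}"
            fi
        done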

    Read the article

  • Compiling LAMP from source - apache2 error “no MPM package installed”

    - by kenny99
    Hi, I've compiled LAMP from source on an Ubuntu VPS. I had to remove a previously installed version of Apache, then I manually compiled all the packages, which seems to have worked up to a point. However, when I try to run commands like "/etc/init.d/apache2 restart" I get the following error: "No apache MPM package installed". I compiled with the prefork MPM, so I don't know why I'm getting this problem. My configure command is as follows:

        ./configure --enable-so --enable-modules=most --with-mpm=prefork

    I have deliberately not used apt-get to install anything and want to avoid it if possible. Anyone have any guidance on how to resolve this error? Thanks in advance.
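
    Worth noting (an observation, not from the asker): the "No apache MPM package installed" message comes from Ubuntu's packaged apache2 control scripts, which look for the distribution's own MPM binaries and so will never be satisfied by a source build. A source-built Apache is normally driven by its own apachectl instead; a sketch assuming the default install prefix of /usr/local/apache2:

        /usr/local/apache2/bin/apachectl configtest
        /usr/local/apache2/bin/apachectl start          # or: restart / graceful
        /usr/local/apache2/bin/httpd -V | grep -i mpm   # confirm the compiled-in MPM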

    Read the article

  • How to let users change linux password from web browser?

    - by wag2639
    I'm not sure if this is a Stack Overflow question or a Server Fault one, but here goes: I have an Ubuntu 10.04 file server (Samba/FTP/HTTP) and I would like to give users the ability to change their password on the server from their web browser. I've written a similar script before using PHP and a mess of exec calls, but I believe that isn't secure, because the password can be seen by anyone looking at the list of processes on the server. Is there some kind of plugin (PHP or Python or other) that can do this easily? I'd rather not use something like Webmin, as it's overkill for this.
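
    Whatever language ends up doing the work, the process-list leak specifically comes from passing the password as a command-line argument; passing it on stdin avoids that, and chpasswd is built to read user:password pairs from stdin. A sketch only, and it assumes the web server user is granted a tightly restricted sudo rule (that rule is an assumption and needs careful thought):

        # assumed sudoers entry, e.g.:  www-data ALL=(root) NOPASSWD: /usr/sbin/chpasswd
        # the password travels over stdin, so it never appears in `ps` output
        printf '%s:%s\n' "$USERNAME" "$NEWPASS" | sudo /usr/sbin/chpasswd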

    Read the article

  • Samba PDC plus universal folder

    - by skids89
    I know how to configure Samba on my Ubuntu box to become a PDC; however, I need some select files, beyond the users' personal files, to be accessible to multiple users. For example, users A-C need to be able to access a schedule saved as a spreadsheet, but user D does not; and users B-D need to be able to access confidential employee info, but user A does not. How do I set this up on top of the PDC structure? Any video tutorials would be a plus. I'm new to Linux, so documentation is a confusing, slow slog to learn from. Thanks so much in advance!
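
    The usual pattern (not PDC-specific) is one Unix group per shared folder, with a Samba share restricted to that group. A hedged sketch with made-up group, user and path names:

        sudo groupadd schedule
        sudo usermod -aG schedule userA    # repeat for userB and userC, but not userD
        sudo mkdir -p /srv/samba/schedule
        sudo chgrp schedule /srv/samba/schedule && sudo chmod 2770 /srv/samba/schedule
        # then add a share along these lines to /etc/samba/smb.conf and restart Samba:
        #   [schedule]
        #      path = /srv/samba/schedule
        #      valid users = @schedule
        #      writable = yes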

    Read the article

  • Enable Ctrl (or Alt) + arrow keys to mimic 'home' and 'end' functionality

    - by YuKagi
    I am a long-time Mac user and I'm now using an Ubuntu machine for development. While I'm more or less used to a lot of the keyboard shortcuts, one thing I can't get used to is using the 'Home' and 'End' keys to move around lines of text. On a Mac you use "Command + right arrow" to go to the end of a line and "Command + left arrow" to go to the beginning. Is there a way to enable this kind of functionality in Linux? I'm not sure if this would be considered remapping, keyboard shortcuts, or something else...

    Read the article

  • Cannot type backquote or backtick in xterm

    - by Cocoro Cara
    Ubuntu 10.10, XTerm (261), keyboard layout = Canadian. Somehow, the backquote (backtick, `) character cannot be entered in XTerm: I type it and nothing happens, and the cursor does not move forward. I know the key works because I can input it in Terminal (gnome-terminal); the only strange thing there is that I have to type the key twice for it to appear. Just to test it, I tried typing it in other applications, and the same thing happens: I have to type it twice in Firefox, gedit, etc. One more strange thing: I could not input it into the textbox in which I am typing this message, but I can input it in the URL bar, search bar, etc. Someone please help me solve this mystery. I like to use XTerm and I need the backquotes.
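
    The "type it twice" behaviour is the classic sign that the key is a dead key (dead_grave) in the Canadian layout: GTK applications compose it on the second press, while XTerm, without an input method, tends to drop it. Two hedged options: pick a no-dead-keys variant of the layout in the keyboard preferences, or remap the dead key for the current X session:

        xmodmap -e 'keysym dead_grave = grave'   # put this line in ~/.Xmodmap to make it persistent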

    Read the article

  • Server memory issues, and expected level of service from hosting company

    - by Greg
    I'm involved in maintaining an Ubuntu VPS which runs our django websites (nginx/apache/mod_wsgi) and we've been having some memory spikes which have either caused the database to die, or induced kernel panic when the memory management system can't find any killable processes. I'm working on fixing the memory spikes, but I'm wondering whether there's anything I can do to better deal with the problem if it occurs again. Are there any tools I could use to detect the memory spikes and then, say, kill the offending process and email the server admin to fix it up? Killing off one website so that the server can remain operational is certainly preferable to the whole thing falling over. Also, we were charged $600 for after-hours service because we had to get the hosting company to restart the server - is this standard practice among hosting companies? Another provider I work with provides a panel with which I can stop and start the server myself, and given that a restart was all that was needed, $600 seems mightily excessive. (That's NZD, it's around $445 USD)
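
    Not a fix for the spikes themselves, but tools like monit are built for exactly this kind of watchdog duty, and a home-grown cron job can do a rough version of it. A minimal sketch with a placeholder threshold and address, assuming a local mailer is configured (the free-based percentage is only a rough measure):

        #!/bin/sh
        THRESHOLD=90
        used=$(free | awk '/Mem:/ {printf "%d", $3/$2*100}')
        if [ "$used" -ge "$THRESHOLD" ]; then
            ps aux --sort=-%mem | head -n 20 | mail -s "Memory spike on $(hostname)" admin@example.com
            /etc/init.d/apache2 restart
        fi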

    Read the article

  • How to debug "MySQL server has gone away"?

    - by fefe
    I have a virtual machine (Ubuntu 12.04, MySQL 5.5) running under VMware and dedicated to hosting a MySQL server, which I connect to on an internal IP. I'm trying to find out why I get the "MySQL server has gone away" error; on one of my Windows machines, Apache stops because of this issue. I have been trying to fine-tune my.cnf with the following parameters, but it did not bring the desired result.

        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address       = 0.0.0.0
        #
        # * Fine Tuning
        #
        wait_timeout       = 180
        key_buffer         = 384M
        max_allowed_packet = 64M
        thread_stack       = 192K
        thread_cache_size  = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover     = BACKUP
        max_connections    = 500
        table_cache        = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit  = 1M
        query_cache_size   = 32M

    How do I debug this issue? What is missing from the configuration to avoid this error?
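
    A few checks that usually narrow "gone away" down (a sketch; log paths may differ): the common culprits are a connection sitting idle longer than wait_timeout (only 180 s here), a query or result larger than max_allowed_packet, or mysqld itself restarting.

        # Has mysqld restarted (Uptime resets), and what limits are actually in effect?
        mysql -e "SHOW GLOBAL STATUS LIKE 'Uptime';"
        mysql -e "SHOW GLOBAL VARIABLES LIKE 'wait_timeout'; SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';"
        # Any crashes or restarts in the error log?
        tail -n 100 /var/log/mysql/error.log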

    Read the article

  • Multiple SSH private keys for the same host

    - by Sencha
    How can I store two different private SSH keys for the same host? I have tried two entries in /etc/ssh/ssh_config for the same host with different keys, and I've also tried putting both keys in the same file and referencing it from one host setting, but neither works. More detail: I'm running Ubuntu Server (12.04) and I want to connect to GitHub via SSH to download the latest source for my projects. There are multiple projects running on the same server and each project has a GitHub repo with its own unique deployment key pair. So the host is always the same (github.com) but the keys need to be different depending on which repo I'm using. Different /etc/ssh/ssh_config versions I have tried:

        Host github.com
            IdentityFile /etc/ssh/my_project_1_github_deploy_key
            StrictHostKeyChecking no

        Host github.com
            IdentityFile /etc/ssh/my_project_2_github_deploy_key
            StrictHostKeyChecking no

    and this, with both keys in the same file:

        Host github.com
            IdentityFile /etc/ssh/my_project_github_deploy_keys
            StrictHostKeyChecking no

    I've had no luck with either. Any help would be greatly appreciated!
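
    ssh stops at the first Host block that matches, so two blocks for github.com can never select different keys. The usual workaround is one host alias per deploy key, each pointing at github.com, with every repository's remote URL using its own alias. A sketch reusing the key paths above; the alias and repository names are placeholders:

        # add to /etc/ssh/ssh_config (or the deploying user's ~/.ssh/config):
        Host github-project1
            HostName github.com
            User git
            IdentityFile /etc/ssh/my_project_1_github_deploy_key
            IdentitiesOnly yes

        Host github-project2
            HostName github.com
            User git
            IdentityFile /etc/ssh/my_project_2_github_deploy_key
            IdentitiesOnly yes

        # then point each working copy at its alias instead of github.com:
        git remote set-url origin git@github-project1:youraccount/project1.git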

    Read the article

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4 TB hard drive and formatted it with NTFS using:

        parted /dev/sda
        > mklabel gpt
        > mkpart pri 1 -1
        mkfs.ntfs /dev/sda1

    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s. The drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:

        Model: ATA ST4000DM000-1F21 (scsi)
        Disk /dev/sda: 7814037168s
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      2048s  7814035455s  7814033408s               pri

    The kernel used is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4 I get good write speeds of about 140 MB/s. How can I fix the write speed?
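
    One thing worth trying (an aside, not from the asker): ntfs-3g runs in user space through FUSE, which is why mount.ntfs burns CPU, and on ntfs-3g releases of that era the big_writes mount option is commonly reported to raise throughput considerably by reducing FUSE round trips. A sketch with a placeholder mount point:

        sudo umount /dev/sda1
        sudo mount -t ntfs-3g -o big_writes,noatime /dev/sda1 /mnt/storage
        # or in /etc/fstab:
        #   /dev/sda1  /mnt/storage  ntfs-3g  defaults,big_writes,noatime  0  0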

    Read the article

  • Text Formatting toolbar continuously disappears in Impress (open office)

    - by Davide
    This is a very weird and annoying problem; I'm not sure if it's a bug or a "feature". I'm using OpenOffice 3.2 (on Ubuntu 10.04). The Text Formatting toolbar disappears in many circumstances, e.g. each time I click outside a writing area. It's becoming very time-consuming to go to View > Toolbars > Text Formatting to re-enable it each time. Three questions: Is this expected behavior, and if so, is there any setting where I can change it? (Note this wasn't happening in the past with other presentations I made.) Is there a workaround, such as defining a shortcut like Ctrl+whatever, that would make the toolbar appear without juggling menus? Is anybody else experiencing this, especially someone using LibreOffice?

    Read the article

  • virtualbox and nginx server_name

    - by Ivan
    I'm trying to configure gitlab running in an Ubuntu 12.04 guest with a Windows 7 host. I can SSH into the guest using port forwarding and reach the nginx server using port redirection (8888 on the host maps to 80 in the guest, so localhost:8888 on the host reaches the nginx server in the guest), but the server_name in the nginx configuration file is giving me trouble. What are the correct listen and server_name values that nginx would accept? The guest has the NAT interface at 10.0.2.15 and a Host-Only interface at 192.168.56.101, static. Thanks!
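
    Because the request travels through the port forward, nginx sees a Host header of localhost:8888 rather than the guest's address, so a server_name tied to an IP won't match. A common sketch (not gitlab-specific) is simply to make that server block the catch-all default, after which the Host header no longer matters:

        server {
            listen 80 default_server;
            server_name _;
            # ...existing gitlab root/proxy settings unchanged...
        }
        # then: sudo nginx -t && sudo service nginx reload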

    Read the article

  • How to take mysql replication backup

    - by user53864
    I have a MySQL master-master replication setup with a slave for each master (only one master is used for reads/writes at a time) on an Ubuntu server. I'm wondering what would be the best way to schedule backups of the replicated databases with mysqldump. I have the following questions, which are keeping me from proceeding: Is scheduling a mysqldump backup on the masters safe for replication? Is it safe to connect to the masters with GUI applications (Workbench) for database manipulation (reads and writes by developers)? Any input is welcome.
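
    One common arrangement (a sketch, not the only answer) is to take the dump on a slave so the active master never stalls on locks; with InnoDB, --single-transaction gives a consistent snapshot without locking, and pausing the SQL thread keeps any MyISAM tables stable too. The backup path is a placeholder:

        mysql -e "STOP SLAVE SQL_THREAD;"
        mysqldump --all-databases --single-transaction --routines --events \
            > /backups/all-databases-$(date +%F).sql
        mysql -e "START SLAVE SQL_THREAD;"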

    Read the article

  • PHPMyAdmin HTTP auth works, but not cookie auth

    - by ssmy
    I'm running PHPMyAdmin version 3.3.2 on Ubuntu 10.04, fully updated. Recently, the authentication for PHPMyAdmin stopped working. It would return the error 1045. However, login on the command line still worked. I switched to HTTP authentication instead of cookie auth, and now it works fine. Any ideas why this could be, or what I could do to make cookie auth work again? (Partly just to know, and partly since it's a bit nicer).

    Read the article

  • A raw dump from a corrupted large file

    - by Masoud M.
    I have a large .rar file on partition D (Windows 7/NTFS). It is corrupted, due to a bad sector I think, and when I copy it to another place (an external HDD) the system freezes after 88% of the progress. I even tried to copy it from Ubuntu and the same problem occurred. I also tried chkdsk, and it doesn't fix it. I think my last chance is to dump the file with a tool that ignores bad sectors and creates a raw copy of it; then I will repair the file with rar tools. But I cannot find a tool to make a raw dump of a specific file. (On Linux there is dd, but it dumps the whole partition and I cannot use it.) So, does somebody know a tool that can do a raw dump of a single file?
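
    For what it's worth, both dd and GNU ddrescue accept an ordinary file as input, not just a whole device, so a per-file raw dump is possible; ddrescue additionally keeps a map file so re-runs only retry the unreadable areas. A sketch with placeholder paths (the Ubuntu package is named gddrescue):

        sudo apt-get install gddrescue
        ddrescue -v /media/d_drive/archive.rar /media/external/archive.rar /tmp/archive.map
        # plain dd can also skip unreadable blocks of a single file:
        dd if=/media/d_drive/archive.rar of=/media/external/archive.rar bs=4096 conv=noerror,sync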

    Read the article

  • Hourly CRON task running more frequently than one hour

    - by Justin
    I have a cron task that calls a special PHP script via wget. Here is the crontab entry:

        0 * * * * wget http://www....

    It will work perfectly for several days, running on the hour. However, after a few days the cron job will start to be called several times an hour. I have never seen cron drift like this, so I imagine it can't really be a cron issue; however, the logs of the script that is called clearly show it running several times an hour. Server details:

        Ubuntu Lucid, Apache, MySQL, PHP5
        Time is showing correct at the command line
        Server is set up to sync with an NTP server

    In order for the script to run, it must be passed a unique 50-character hash key in the URL, so this script isn't being called from any other source accidentally. What might cause cron to drift like this?
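
    Two things worth ruling out (suggestions, not from the asker): duplicate entries (the user crontab versus /etc/cron.d or /etc/cron.hourly), and wget's own retry behaviour, since a slow or failing response makes wget retry and each retry shows up in the script's log as another "run". Limiting retries and adding a lock keeps overlapping calls out; a sketch of the same crontab line with a placeholder lock file:

        0 * * * * flock -n /tmp/hourly-task.lock wget -q -O /dev/null -t 1 -T 300 "http://www...."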

    Read the article

  • Is it bad to have a very full hard drive on a high traffic database server?

    - by MikeN
    I'm running an Ubuntu server with MySQL as a high-traffic production database server; nothing else runs on the machine except the MySQL instance. We store daily database backups on the DB server. Is there any performance hit, or other reason, why we should keep the hard disk relatively empty? If the disk is filled to 86%+ with the database and all of the backups, does that hurt performance at all? In other words, would the DB server running at 86-90%+ of disk capacity perform any worse than the same server running with only a 10% full disk? The total disk size on the server is over 1 TB, so even 10% of the disk should be enough for basic OS swapping and such.

    Read the article

  • GParted tells me my partition has 1.30 GiB used space but I cannot access its contents

    - by reprogrammer
    I have an ext4 partition (/dev/sda7) for my Linux system, and another (/dev/sda5) for keeping my data. When I installed Ubuntu 10.04 LTS, I set the mount point of /dev/sda7 to "/" and that of /dev/sda5 to "/data". GParted tells me that 1.30 GiB out of 70.12 GiB of /dev/sda5 has been used up, but the mounted directory "/data" is empty. So it looks like my data is there, but I cannot access it. Besides, when I set the mount point, I didn't check the "format" box, so it shouldn't have been formatted. How can I check whether the partition has been formatted? How can I recover my files?
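
    Two hedged checks that usually settle this: the filesystem's creation timestamp shows whether it was recreated during the install, and a small amount of "used" space on an otherwise empty ext4 volume can simply be the journal, lost+found and reserved blocks. It is also worth checking whether files were written to /data before the partition was mounted there, since they would be hidden underneath the mount point:

        sudo dumpe2fs -h /dev/sda5 | grep -i 'created'   # creation date of the filesystem
        sudo umount /data
        ls -la /data        # anything listed here lives on the root filesystem, under the mount point
        sudo mount /dev/sda5 /data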

    Read the article

  • Sync Banshee library data.

    - by Dom
    I use Banshee to organise my music, I particularly like its scoring system and I have smart playlists based on it. However, I have two versions of my music library, one on each of my computers. As one of the computers is small I only have a favourite set of songs on that computer rather than my whole collection. The computers are not on a local network, but I do use Ubuntu One for file sharing between them. Is there any way I can synchronise song data (play count, score, skip count ...) and playlist data (including smart playlists that include songs based on this data) between the two computers? This would only be relevant of course for the songs that exist on both computers, the songs that exist on only one would need to be ignored. I did consider putting the library data file (I think it is .xml but I'm not sure) into the shared file and creating a symbolic link to it, but then I wouldn't be able to have a different set of songs on each computer. Thank you.

    Read the article

  • Ethernet port sleeping on PS3 running linux

    - by Doug
    My lab has a PS3 running Ubuntu Linux 9.04 Server Edition. After a period of a few hours with no use, the Ethernet connection (eth0) seems to go to sleep, causing the connection to be lost. Pinging or trying to SSH into the machine results in no response. The fix I've been using is to access the machine locally and restart it (trying to bring eth0 down then up doesn't seem to correct it). I've tried setting up an hourly cron job that runs on the PS3 and pings another machine just to create network activity, but this doesn't seem to solve the problem either. Update: The solution was to run the above cron job much more frequently: every 10 minutes works.

    Read the article

  • Tomcat access logs - are failed requests included?

    - by Maxim Eliseev
    We have a RESTful web service (Java, hosted in Tomcat on Ubuntu on Amazon EC2). From time to time it fails (not every week). When it fails, Java CPU consumption goes to 100% and the process takes all available memory; it does not recover by itself, and I have to restart the server. There is nothing suspicious in the Tomcat access logs. I guess one of our users could have submitted a very "heavy" request that brought the server down. Is it possible that this request is not in the Tomcat logs because it never finished?
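
    On the logging question: Tomcat's access log valve writes an entry only when a request completes, so a request that hangs forever never appears there. A thread dump and heap histogram taken while the CPU is pegged usually point at the guilty request; a sketch assuming the JDK tools are on the box and Tomcat runs under your user:

        pid=$(pgrep -f org.apache.catalina.startup.Bootstrap)
        jstack -l "$pid" > /tmp/tomcat-threads-$(date +%s).txt
        jmap -histo "$pid" | head -n 40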

    Read the article

  • Cannot access the new cloned server even after new IP address assignment

    - by tough
    I was able to clone an Ubuntu 10.04 server residing in the cloud. It appeared that the new VM was not getting an IP, so I followed some of these steps:

        # cd /etc/udev/rules.d
        # cp 70-persistent-net.rules /root/
        # rm 70-persistent-net.rules
        # reboot

    I didn't follow the later commands, as I was unable to see the two eth MACs described in the referenced site. After this I am able to see an IP for the clone, which is different from the original IP, and I have added the new IP to the DNS server. Now when I try to access it with its newly assigned domain, I am directed to the old server. I can see both VMs running, with different IPs. Where might I have gone wrong? I am new to this admin thing.
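
    When the new name still lands on the old box, it is usually the DNS record or client-side caching (an old record still within its TTL) rather than the clone itself. Comparing the address the clone really has with what the name resolves to narrows it down quickly; the hostname below is a placeholder:

        ip addr show eth0               # on the clone: the address it actually has
        dig +short clone.example.com    # from your workstation: what the name currently resolves to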

    Read the article

  • Apache Subversion and Sudo - Why can't I resolve this hostname?

    - by Hollowsteps
    Okay, I made a mistake, and I'll be the first to admit I'm new at this setup. I built a bare-bones kit, installed Ubuntu on it, and attempted to set up a source control server for a project some friends and I were going to work on. Unfortunately, I screwed up: I followed a dodgy tutorial from 2005, and when it didn't work, I started mixing and matching, trying to get to the source of my problem. So now I sit before you, a broken and miserable man. Desperate to escape this annoying echo of 'Unable to resolve host computer.repositoryname.com', I uninstalled Apache and Subversion. That did not fix it. Next I tried to edit my /etc/hosts, going so far as to remove the reference to '127.0.1.1 computername'. Still I'm plagued. I know I messed up; is there any way to track down this wayward bug?
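
    For what it's worth: 'unable to resolve host <name>' is sudo (and friends) complaining that the machine's own hostname, as set in /etc/hostname, has no matching entry in /etc/hosts, so removing the '127.0.1.1 computername' line makes it worse rather than better, and it has nothing to do with Apache or Subversion. Putting the line back, using whatever name /etc/hostname holds, should silence it:

        hostname        # the name the system expects to resolve
        echo "127.0.1.1   computer.repositoryname.com computer" | sudo tee -a /etc/hosts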

    Read the article
