Search Results

Search found 10908 results on 437 pages for 'firefox sync'.

Page 231 of 437

  • How to have a soft-real-time process in the presence of heavily swapping, IO-intensive background load?

    - by Vi
    The player already has every priority knob I know of turned up:

        schedtool: PID 32301: PRIO 4, POLICY R: SCHED_RR, NICE -20, AFFINITY 0xf
        ionice:    realtime: prio 4

    But the music stutters anyway. The background load is low priority (SCHED_IDLEPRIO, idle ionice), but it uses more memory than is physically available and does a lot of IO and computation. Latencytop shows around 1500 ms, both for the background load and for unrelated processes, on: following symlink; writing buffer to disk (sync); page fault; writing a page to disk. Load average is 10 and climbing. Why can't the kernel allocate, say, 200 MHz of one core, 32 MB of memory, and at least one IO opportunity per second to mplayer to keep it happy, while the background calculations continue? Or: why can't it leave the background task and the swap to thrash each other while keeping the rest of the system as responsive as if there were no background load? How can I have RT processes AND a heavy background load simultaneously (without virtual machines)?
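
    One mechanism that can carve out this kind of reservation on a modern kernel is cgroups. A minimal sketch, assuming cgroup v2 mounted at /sys/fs/cgroup; the group name, the budgets and the media file are illustrative, not prescriptive:

        # enable the cpu and memory controllers for child groups
        echo "+cpu +memory" | sudo tee /sys/fs/cgroup/cgroup.subtree_control
        sudo mkdir /sys/fs/cgroup/rt-audio
        # guarantee ~32 MB against reclaim, and 20% of one core (200 ms per 1 s)
        echo 33554432 | sudo tee /sys/fs/cgroup/rt-audio/memory.min
        echo "200000 1000000" | sudo tee /sys/fs/cgroup/rt-audio/cpu.max
        # move this shell into the group; mplayer started from it inherits the limits
        echo $$ | sudo tee /sys/fs/cgroup/rt-audio/cgroup.procs
        mplayer track.ogg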

    Read the article

  • Make Google Chrome's minimise, restore and close buttons look like other programs?

    - by TRiG
    I like the way Google Chrome puts the tabs above the address bar, but I don't like the way the minimise, restore and close buttons are a different shape from every other program's. It means that if I park the mouse in the top corner and minimise everything, I find that I've restored Chrome, not minimised it. Is there any way to get these buttons to a normal shape and size? In the screenshot (not reproduced here), Firefox is in front, looking normal like every other program, and Chrome is above and behind, with its buttons at a non-standard position and size.

    Read the article

  • How to force rsync to use destination directory as root

    - by thepurplepixel
    I have a simple script to one-way-sync files and folders within a directory:

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='/downloads/'
        SOURCE='/srv/torrents'

        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -type d -empty -prune -exec rmdir -p \{\} \;

    However, when this rsync operation runs, it creates a folder named torrents inside /downloads on the destination machine. How can I force rsync to put all folders and files from /srv/torrents (local) into /downloads/ (remote) instead of creating /downloads/torrents as a separate directory?
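
    For context, rsync's handling of the source path hinges on a trailing slash, which a two-line illustration (using the same paths as the script) makes plain:

        rsync -r /srv/torrents  host:/downloads/   # copies the directory itself -> /downloads/torrents/
        rsync -r /srv/torrents/ host:/downloads/   # copies only its contents   -> /downloads/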

    Read the article

  • Samba does not reload user group members

    - by xato
    I am running a simple Samba server setup where users connect to a share containing folders for specific user groups. The folders are chmod 2770, so only users who are in the correct group can read/write in them. The problem is that if I change group memberships (i.e. remove a user from a group or add a user to a group; the changes are in sync between clients and server!), Samba does not automatically reload the group memberships for the user, so they can still write to folders of groups they are no longer a member of, etc. I either have to reconnect to the share or restart Samba for the changes to apply. Is there any way to prevent group caching and/or enable group membership reloading in Samba? My smb.conf: https://gist.github.com/anonymous/ca7c10a3b3e2168d7a03

    Read the article

  • How to reliably synchronise file servers between London and Shanghai?

    - by Andy S
    We have two offices, one in London and one in Shanghai, each needing to be able to access the same set of files. This means we need a solid, speedy means of synchronising a set of folders between servers at either office. They're likely to be Windows servers, but we could look at Linux boxes if the software side makes more sense on *nix. We've considered Rsync, Unison, Gluster, and a few other options, but none of them seem capable of reliably keeping the servers in sync between such distant office locations. Each office is on DSL connectivity over the open internet, so encryption is also a factor. Does anyone have any hints for getting the servers synchronising in as close to real time as possible, without dying constantly? Andy

    Read the article

  • Gnome 3 - Unable to change date and time

    - by Chris Harris
    I am running Arch Linux with GNOME 3. Although my time and date settings in /etc/rc.conf show HARDWARECLOCK='UTC' and TIMEZONE='America/LosAngeles', I continue to get the Europe/London timezone. Changing the date and time via the GUI requires root access; after authorising, the date and time can be changed, but as soon as the window is closed they revert to the previous, incorrect timezone. I am able to use pool.ntp.org to sync the clock to the correct time, but this only lasts for the current session, which is inconvenient since there is not always network access. What other solutions are available for this problem?
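
    The effective timezone is whatever /etc/localtime resolves to; on Arch, rc.conf's TIMEZONE is only supposed to populate it at boot. Worth noting: the zone name must match a file under /usr/share/zoneinfo, and the shipped zone is America/Los_Angeles (with an underscore), so the value quoted above would silently fail to resolve. A hedged sketch of checking and fixing this by hand:

        # see what the system actually resolves
        ls -l /etc/localtime
        # point it at the intended zone
        sudo ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime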

    Read the article

  • Poor quality when trying to stream a 720p video to XBox 360 using Media Center Extender

    - by MBraedley
    I have my Xbox 360 set up as a Media Center Extender for my Windows 7 desktop. SD-quality AVI videos stream fine to the Xbox, either through the video library or through Media Center Extender, but when I try a 720p MKV file, the frame rate plummets and the A/V sync is completely lost. I don't want to transcode or switch container formats (MKV isn't supported by the 360); I still want to stream. Both my desktop and the 360 are plugged into the same gigabit switch, which is plugged into my ISP-supplied modem/router. The video plays fine on my machine in a number of programs. Given that I should have more than enough bandwidth for this video, why won't it play back properly?

    Read the article

  • RAID1: Migrate HDD to SSD?

    - by OMG Ponies
    My current workstation uses an Adaptec 5805, with Win2008 mirrored between two 72 GB (10K?) Savvio drives. My question is whether there's a way to migrate the mirror to SSDs (I've been looking at 90 GB Corsair Force drives, which use a SandForce controller) without installing the OS fresh. If I replaced one of the mirrored drives with an SSD, would the array sync onto it? Then I could promote that SSD to primary and use the second SSD as its mirror. That'd be too easy... Or should I use Ghost to take an image of the existing setup and restore it onto a newly created SSD mirror?

    Read the article

  • Why do servers go down after a lot of traffic?

    - by mohabitar
    I'm working on an iOS app that makes extensive use of databases, where users will be able to sync their data to a server. However, I'm terrified that if too many users start using the app, the servers will no longer be able to handle the load. I'm not a server guy at all and am not too familiar with how this works, but my question is: why do servers get overloaded, and how can that be prevented? Does it have to do with who my server host is, or with the efficiency of my code? If my host is a reliable one, such as Amazon AWS, am I still at risk of server problems? Bottom line: does it come down to the way I implement my code, or to who my host is?

    Read the article

  • Transfer Win8 user settings between profiles [closed]

    - by GlennFerrieLive
    Possible duplicate: How do I sync grouped Windows Store apps between devices? Is there a way for me to copy/save/transfer my "start menu" configuration, meaning the grouping and ordering of the elements on the Start screen, between user profiles? Is it in the registry? I am open to manual or "coded" suggestions. UPDATE: I'd like to VETO this closing. I am aware of the "roaming" profile behaviour. I want to COPY my configuration BETWEEN profiles on the same machine: DIFFERENT profile, DIFFERENT person. I like the way my Start screen is set up, and I want to set my wife up with the same layout.

    Read the article

  • Dell PowerEdge 6850 Degraded HDD

    - by Matt
    Good morning. We have a Dell PowerEdge 6850 with a degraded drive in the RAID array. I have never had to recover from such an issue, so any help or advice would be welcome. It hasn't affected the server at the operating system level, but it has slowed down performance. I have a replacement drive in hand, but as this is our main database server I want to proceed with extreme caution. My options as I see them are:

    1. Hot-swap the degraded drive with the new one, and the data will automatically re-sync and we are all back to normal. Presumably this depends on the current RAID configuration?
    2. Reading various comments online, I may need to re-configure the RAID array and rebuild onto the new drive. This screams disaster to me, my main worry being that I wipe the other data.

    Option 1 would of course make my day. Thanks in advance.

    Read the article

  • Why would my 15" MacBook Pro suddenly ask me to Hard Quit while I was in a browser?

    - by flathead27ford
    I was in Firefox, looking at a search I had performed on eBay, when all of a sudden the screen greyed out from the top down. In the middle of the screen was a box, in about five different languages, saying I had to hard quit my computer. I tried other keys and nothing was working, so I hard quit and restarted. Can anyone tell me why this happened? Cheers, Kyle

    Read the article

  • OWA no longer accessing one backend Exchange server

    - by Morchuboo
    We have IIS hosting OWA as the web frontend to three backend Exchange servers. Yesterday we got a lot of event 9791 warnings: "Cleanup of the DeliveredTo table for database 'Second Storage Group\Mailbox Store EUROPE 2' was pre-empted because the database engine's version store was growing too large. 0 entries were purged." At this point the server was crawling. Our mail admin is currently away and not contactable, so we rebooted the server. Everything seems OK when reading mail from Outlook and evolution-mapi clients, but OWA and ActiveSync connections cannot get through. When logging into OWA, users whose mailboxes are not on this backend server are fine, but users on this server can reach the OWA frontend and then, once they submit their credentials, the page returns a 503 Service Unavailable error. We have since restarted the affected Exchange server and the IIS server, as well as run iisreset /noforce, but the problem persists. Can anyone suggest what we should look at?

    Read the article

  • How to efficiently dump a huge MySQL InnoDB database?

    - by Jagbir
    I have an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the root partition holding them is itself only 300 GB. That essentially means around 96% of / is full and there's no space left for storing a dump or backup, and no other disk is attached to the server at the moment. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently, with minimum downtime. I'm thinking along these lines:

    1. Request that an extra drive be attached to the server, and take a dump onto that drive.
    2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
    3. When the migration is due, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't accept any writes, and tell the app developers to update their config with the new IP address for the DB.

    What are your suggestions to improve this, or is there a better alternative approach for this task?
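
    If the extra drive can't be arranged, one space-free variant of step 1 is to stream the dump straight to the new server instead of writing it locally. A hedged sketch (hostname and credentials hypothetical):

        # --single-transaction gives a consistent InnoDB snapshot without locking;
        # --master-data=2 records the binlog coordinates needed to attach the
        # new server as a slave afterwards.
        mysqldump --single-transaction --master-data=2 --all-databases \
          | gzip -c \
          | ssh user@new-db-server 'gunzip -c | mysql'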

    Read the article

  • Windows 2008 server: move users to new server

    - by moos3
    I have a new server that is replacing a current Windows Server 2008 R2 machine. I want to move all the local users and IIS sites to the new box. Is there a way to export the two and import them on the new box? I have already synced all the files for all the sites across. The box doesn't belong to a domain, so it's not a matter of joining one; the users I'm talking about are local computer users.

    Read the article

  • Rsync over ssh with root access on both sides

    - by Tim Abell
    Hi, I have one older Ubuntu server and one newer Debian server, and I am migrating data from the old one to the new one. I want to use rsync to transfer data across, to make the final migration easier and quicker than the equivalent tar/scp/untar process. As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends, as not all files on the source side are world-readable and the destination has to be written with the correct permissions into /home. I can't figure out how to give rsync root access on both sides. I've seen a few related questions, but none quite matches what I'm trying to do. I have sudo set up and working on both servers.
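
    Since sudo already works on both ends, one common pattern is to run the local side under sudo and tell rsync to invoke sudo on the remote side as well. A minimal sketch, assuming the remote sudoers allows rsync without a password prompt (and without requiretty); user, host and path are hypothetical:

        # -aHAX: archive mode plus hard links, ACLs and extended attributes
        sudo rsync -aHAX -e ssh --rsync-path='sudo rsync' \
            user@oldserver:/home/someuser/ /home/someuser/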

    Read the article

  • Anybody know how to get podcasts sorted correctly on a Zune?

    - by Will
    If I could understand the logic behind it, I could actually manage to get my stuff sorted correctly on the device. It doesn't matter how things are sorted on disk, or in the software itself: as items are added to the sync list, they end up sorted in some weird manner which, quite frankly, brings out an anger in me that scares me. A primitive, misshapen and unthinking anger. Kind of like the lumpy guy in Altered States. Frankly, I'm afraid one day it's going to burst forth and I will track down the entire Zune team and make them pay for their failures. Help me prevent this from happening. Help me understand what I need to do to ensure my podcasts end up in the order I wish them to be in, not some obscure random order that makes no sense.

    Read the article

  • aws s3 works in a script but not from cron

    - by user3800017
    Guys, my first post! Hope it's not the last. I have a few servers on the AWS EC2 platform. I made a simple script to back up my custom logs to their S3 storage bucket. The problem is that the script works fine when run by hand, but when I add it to the crontab the script executes except for the s3 mv part! Here is my code:

        NOW=$(date "+%b_%d_%Y")
        MY_HOSTNAME=`uname -n`
        mv /opt/req/req* /opt/req/bkup/
        mv /opt/response/res* /opt/req/bkup/
        cd /opt/req/bkup/
        tar -cvf ${MY_HOSTNAME}_req_bkup_${NOW}.tar re*
        rm *.txt
        aws s3 mv /opt/req/bkup/* s3://req
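
    The usual culprit here is cron's minimal environment: cron jobs typically run with something like PATH=/usr/bin:/bin, so an aws binary installed elsewhere is simply not found. A hedged sketch of the common fix (the install location is an assumption; `which aws` will show yours). Note also that `aws s3 mv` takes a single source, so moving a directory's contents is spelled with --recursive rather than a shell glob:

        # call the binary by absolute path inside the script...
        /usr/local/bin/aws s3 mv /opt/req/bkup/ s3://req/ --recursive
        # ...or export a fuller PATH at the top of the crontab:
        # PATH=/usr/local/bin:/usr/bin:/bin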

    Read the article

  • SSH RSA key works with external IP but not internal IP

    - by Ian
    I am using Rackspace cloud hosting. I have two servers behind a load balancer. Each server has an external IP and an internal IP. I want to set up a sync job that uses SSH to transfer files. I made an RSA key, and I can successfully SSH from server A into server B, using the external IP of server B, without being prompted for a password. If I try to do the same using the internal IP, it prompts me for a password. I want to be able to use the key instead of the password. Why is this? Is there something special I have to do during key generation so it works for both IPs? Any help is appreciated.
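
    Key generation is not IP-specific, so the difference usually lies in where the two connections actually land. A diagnostic sketch (addresses hypothetical):

        # -v shows which key is offered and why it is (or isn't) accepted;
        # compare the host key fingerprints of the two connections too
        ssh -v user@203.0.113.10    # external IP: key accepted
        ssh -v user@10.0.0.10       # internal IP: falls back to password

    If the host keys differ between the two, the internal IP is not reaching the machine you think it is.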

    Read the article

  • Linux software RAID runs checkarray on the first Sunday of the month. Why?

    - by mgjk
    It looks like Debian has a default of running checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2 TB mirror. Doing this "just in case" seems bizarre to me. Discovering data out of sync between the two disks, with no quorum to decide which copy is right, would be a failure anyway: this massive check could only tell me that I have an unrecoverable drive failure and corrupt data. Which is nice to know, but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check needed? Should I take it out of my cron?

        # tail -1 /etc/cron.d/mdadm
        57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet

    Thanks for any insight.
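
    Rather than deleting the cron entry by hand, two gentler knobs exist on Debian; a sketch, assuming the stock file locations:

        # disable the monthly run persistently via the package's own switch
        sudo sed -i 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm
        # or keep the check but cap its disk bandwidth (value in KB/s)
        echo 10000 | sudo tee /proc/sys/dev/raid/speed_limit_max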

    Read the article

  • Missing menu items for Azure SQL tables within SQL Server Management Studio?

    - by Sid
    I have a table (say Table1) that is replicated via the SQL Data Sync Agent across a local SQL Server 2012 instance and an Azure SQL database (part of Microsoft Azure). Everything about Table1 (schema, table values, etc.) is identical to the best of my understanding. However, when I list and right-click Table1 from Microsoft SQL Server Management Studio 2012 (SSMS), I get some very different menu options, even for seemingly basic operations. Let's focus only on the 'Design' menu item:

    • It is visible for Table1 on the local SQL Server in SSMS.
    • It is missing for Table1 on Azure SQL via SSMS.
    • It is visible for Table1 (as Open Table Definition) on Azure SQL when reached via Visual Studio 2012 (Server Explorer, Data Connections).

    (Screenshots not reproduced here.) Now, I mostly work from scripts anyway (especially when I need to check in the SQL scripts), but this difference concerns me to some extent. Am I witnessing just a tools artifact in SQL Server Management Studio when connecting to Azure SQL, or is it something more serious about the limitations of Azure SQL itself (although just seeing the Design surface is so basic!)?

    Read the article
