Search Results

Search found 10908 results on 437 pages for 'firefox sync'.

  • Accessing Virtual Host from outside LAN

    - by Ray
     I'm setting up a web development platform that makes things as easy as possible to write and test all code on my local machine, and sync this with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server."
     To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:
         <VirtualHost *:80>
             ServerAdmin webmaster@localhost
             DocumentRoot "c:/wamp/www/ladybug"
             ServerName ladybug
             ErrorLog "logs/your_own-error.log"
             CustomLog "logs/your_own-access.log" common
         </VirtualHost>
         <VirtualHost *:80>
             ServerAdmin webmaster@localhost
             DocumentRoot "c:/wamp/www"
             ServerName localhost
             ErrorLog "logs/localhost-error.log"
             CustomLog "logs/localhost-access.log" common
         </VirtualHost>
     Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
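     As an illustration only (the DynDNS hostname below is a placeholder, and the access-control lines are an assumption about a default WAMP setup, which commonly limits access to localhost), a name-based vhost can list the dynamic DNS name as a ServerAlias so requests arriving under that name land in the same DocumentRoot:
         <VirtualHost *:80>
             ServerAdmin webmaster@localhost
             DocumentRoot "c:/wamp/www/ladybug"
             ServerName ladybug
             # Hypothetical DynDNS hostname; requests for it match this vhost too
             ServerAlias url.dyndns.info
             # Assumption: allow clients outside the LAN (Apache 2.2 syntax)
             <Directory "c:/wamp/www/ladybug">
                 Order Allow,Deny
                 Allow from all
             </Directory>
         </VirtualHost>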

  • Postfix spool on ext3 optimiziations in >=linux-2.6.34 days

    - by Luke404
     Given the very specific nature of the subject (we're not talking about mailboxes, just the spool; we're not talking about other filesystems, just ext3; and so on...) and the maturity of the software involved (Linux kernel, ext3fs, Postfix), I'd think there should be a more or less agreed-on set of best practices for filesystem-related tuning. I'm trying to get a roundup of them:
       - data=journal became the default in recent kernels (somewhere around 2.6.30 IIRC), so we should be OK with that.
       - Wietse Venema says atime must be on, but the Postfix documentation recommends noatime while talking about the Incoming Queue. Does that mean Postfix needs atime on just for some queue directories and will benefit from noatime on the others? Can we use noatime if we just don't use ETRN?
       - The filesystem can be mounted nodev,noexec,nosuid - the no* options won't prevent you from setting attributes (Postfix uses the exec attr), they just won't have any effect (we don't run anything from the spool).
       - The fsync() issue cited by Wietse and/or chattr -S are probably linked to the sync/async options of ext3fs, but I do not understand them well enough. Is mounting the filesystem with the async option equivalent to chattr -R -S on the whole fs? It seems like it will increase performance, but will that pose a risk of "loss of mail after a system crash" or is it really "safe on /var/spool/postfix"?
       - Would you tune anything else on postfix-2.6.x to work better on ext3, or do you leave defaults everywhere?
       - Is there a "best" Linux I/O scheduler for this kind of workload (namely CFQ or deadline?), or is that something that will vary too much based on hardware configuration?
       - Would you tune anything else in the filesystem or in the kernel? Anything else?
     References: Postfix Performance here on SF; Postfix documentation about the Incoming Queue; Wietse Venema in Best file system on [email protected] here; Postfix and ext3 on [email protected] here and there.
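     For concreteness, this is roughly how the options discussed above combine in a single fstab line. It is a sketch under stated assumptions - the device, mount point, and the choice of noatime are placeholders, not a recommendation from the thread:
         # /etc/fstab - hypothetical dedicated spool partition
         /dev/sda5  /var/spool/postfix  ext3  defaults,noatime,nodev,noexec,nosuid,data=journal  0  2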

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
     We have an Ubuntu 10.04 Server that has a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes.
     Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't, as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them?
     Edit: The share is defined in smb.conf as follows:
         [global]
         log file = /var/log/samba/log.%m
         passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
         obey pam restrictions = yes
         map to guest = bad user
         encrypt passwords = true
         passwd program = /usr/bin/passwd %u
         passdb backend = tdbsam
         dns proxy = no
         netbios name = ubsrv
         server string = ubsrv
         unix password sync = yes
         os level = 20
         syslog = 0
         usershare allow guests = yes
         panic action = /usr/share/samba/panic-action %d
         max log size = 1000
         pam password change = yes
         workgroup = workgroup

         [Projects]
         valid users = @Staff
         writeable = yes
         user = @Staff
         create mode = 0777
         path = /srv/samba/Projects
         directory mode = 0777
         store dos attributes = Yes
     The folder itself looks like this:
         ls -l /srv/samba/
         drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects
     Thanks in advance, Matt
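     One Samba-side knob that may be worth testing here (an assumption on my part, not something confirmed in the question): the dos filemode share option lets any user who has write permission on a file change its permission bits, which is closer to the Windows group behaviour described above.
         # In the [Projects] share, alongside "store dos attributes = Yes":
         dos filemode = yes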

  • How can I mount dd image of a partition?

    - by Puneet Arora
     I created a dd image of a partition (containing an HFS+ FS) of one of my disks (and not the entire disk) a few days ago, using the following command:
         dd conv=sync,noerror bs=8k if=/dev/sdc2 of=/path/to/img
     How can I mount it? I tried the following but it doesn't work:
         mount -o loop,ro -t hfsplus /path/to/img /path/to/mntDir
     It gives me:
         mount: wrong fs type, bad option, bad superblock on /dev/loop1,
                missing codepage or helper program, or other error
                In some cases useful info is found in syslog - try
                dmesg | tail or so
     and dmesg | tail gives me:
         [5248455.568479] hfs: invalid secondary volume header
         [5248455.568494] hfs: unable to find HFS+ superblock
         [5248462.674836] hfs: invalid secondary volume header
         [5248462.674843] hfs: unable to find HFS+ superblock
         [5248550.672105] hfs: invalid secondary volume header
         [5248550.672115] hfs: unable to find HFS+ superblock
         [5248993.612026] hfs: unable to find HFS+ superblock
         [5248998.103385] hfs: unable to find HFS+ superblock
         [5249031.441359] hfs: unable to find HFS+ superblock
         [5249036.274864] hfs: unable to find HFS+ superblock
     Is there something wrong that I am doing? I tried searching on how to do this, but all the results I get only talk about mounting a partition from within a full disk image, using the offset option with mount - none talk about the case where the image itself is that of a partition. Thanks.
     PS: I'm running 64-bit Arch Linux, and the partition from the original disk, /dev/sdc2, mounts fine.
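     A small diagnostic sketch, assuming the paths above and that blockdev/blkid are available. dd with conv=sync pads the final partial block up to bs, and the HFS+ secondary volume header lives near the end of the volume, so a size mismatch between image and source partition is one possible explanation for the errors above:
         blockdev --getsize64 /dev/sdc2        # size of the source partition, in bytes
         stat -c %s /path/to/img               # size of the image, in bytes
         # Attach the image read-only to a loop device and probe what it looks like:
         losetup --find --show --read-only /path/to/img
         blkid /dev/loop0                      # assumes loop0 was the device printed above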

  • Vista laptop 1 connects to network but Vista laptop 2 doesn't

    - by Jaips
     (Sorry if this is a duplicate; I did a hunt but couldn't find a matching question.) We recently got a new router for our network (Thomson TG585v8). It was already pre-configured by the ISP and was easy to set up. Both Vista laptops in this home network connected without trouble, as did an iPod touch, all via wifi. In the last two days one of the laptops has been unable to connect cleanly to the network (it connects as "unidentified network"). The other laptop and the iPod have not had any issues. I can't think of anything that has changed to make this happen now.
       - I have rebooted the laptop and the router.
       - I have updated the laptop wireless driver.
       - I have checked that the laptop is set to automatically get an IP.
       - I have logged into the router, which correctly identifies all three wireless devices.
       - The problem laptop connects via LAN without issue.
     Some things that may or may not matter: the problem laptop also sometimes uses a 3G dongle; both laptops use Windows Live Mesh to sync folders; the laptop is usually set to go to sleep.
     NB: it seems this is an intermittent issue - after an hour (since I noticed it) it seems to have fixed itself. I would love ideas, though, to solve this problem for good.

  • switching dns server providers

    - by Yoav Aner
     I'm trying to wrap my head around something that I thought I kinda understood, but clearly there's some piece missing. We're currently using Zerigo as our primary DNS, with slave DNS running on Linode. This works quite well. However, recent DDoS attacks on Zerigo meant that whilst DNS queries were still resolved, we were unable to make any DNS changes. Since we rely on DNS changes in our own infrastructure, I'm looking to improve this somehow.
     I'd rather not ditch Zerigo completely, and I realise that this or a similar problem can happen with ANY primary DNS hosting provider. It might not be DDoS, but a bug on their server, or something else that means we can no longer issue updates. For this I want to have some fallback option: a completely independent (primary) DNS provider (maybe AWS), which we will keep in sync manually. We will switch over to it when there's a problem.
     This brings me to my question: how do I make sure we can switch those providers quickly enough? Specifically, at our registrar there's a list of name servers, but no settings like TTL etc. How do DNS clients know to use the newly updated name server records? Is this configured in the SOA? However, the SOA itself is hosted with the DNS provider and we might not be able to update it... This is not a question about a one-time move, which can be planned and scheduled and tested, but rather about being able to do so when things are half-broken.
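     A small sketch for seeing where the switch-over delay would actually come from (the zone name and resolver below are placeholders): roughly speaking, the NS records and TTLs served by the parent zone's servers govern how long resolvers keep following the old delegation.
         # Ask the parent (.com) servers directly for the delegation and its TTL
         dig +norecurse @a.gtld-servers.net example.com NS
         # Ask an open resolver what it currently has cached
         dig @8.8.8.8 example.com NS +noall +answer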

  • Testing for disk write

    - by Montecristo
     I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a structure of directories like this:
         000/000/000000001.jpg
         ...
         236/519/236519107.jpg
     This structure will allow me to save up to 1'000'000'000 images, as I'll store a max of 1'000 images in each leaf. I've created it; from a theoretical point of view it seems OK to me (though I've no experience with this), but I want to find out what will happen when there are directories full of files in there.
     A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC) or should I create directories as they are needed? From a developer point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin point of view, is this OK?
     I've thought I could act as if the filesystem were already under the running application: I'll make a script that will save images as fast as it can, monitoring things as follows:
       - How much time does it take for an image to be saved when there is no or little space used, and how does this change when the space starts to be used up?
       - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?
       - Does launching this command:
             sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
         make any sense at all? Is this the only thing I have to do to have a clean start if I want to start over again with my tests?
     Do you have any suggestions or corrections?
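     A minimal sketch of the kind of measurement loop described above - the sample image, mount point, and counts are all assumptions, and forcing a sync after every copy is a deliberate choice so each sample includes the flush:
         #!/bin/bash
         # Hypothetical benchmark: write N copies of a sample image into the
         # pre-created 000/000/ tree and log the elapsed time per file.
         SRC=/tmp/sample.jpg        # assumed ~5MB test image
         ROOT=/mnt/images           # assumed root of the directory tree
         for i in $(seq 1 100000); do
             printf -v name '%09d' "$i"
             dir="$ROOT/${name:0:3}/${name:3:3}"
             start=$(date +%s.%N)
             cp "$SRC" "$dir/$name.jpg" && sync
             end=$(date +%s.%N)
             echo "$name $(echo "$end - $start" | bc)" >> /tmp/write-times.log
         done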

  • Raid1 with active and spare partition

    - by Daniel Baron
     I am having the following problem with a RAID1 software RAID partition on my Ubuntu system (10.04 LTS, 2.6.32-24-server in case it matters). One of my disks (sdb5) reported I/O errors and was therefore marked faulty in the array. The array was then degraded, with one active device. Hence, I replaced the hard disk, cloned the partition table, and added all the new partitions to my RAID arrays. After syncing, all partitions ended up fine, having 2 active devices - except one of them. The partition which reported the faulty disk before did not include the new partition as an active device, but as a spare disk:
         md3 : active raid1 sdb5[2] sda5[1]
               4881344 blocks [2/1] [_U]
     A detailed look reveals:
         root@server:~# mdadm --detail /dev/md3
         [...]
         Number   Major   Minor   RaidDevice State
            2       8       21        0      spare rebuilding   /dev/sdb5
            1       8        5        1      active sync        /dev/sda5
     So here is the question: how do I tell my RAID to turn the spare disk into an active one? And why has it been added as a spare device? Recreating or reassembling the array is not an option, because it is my root partition. And I can not find any hints on that subject in the Software RAID HOWTO. Any help would be appreciated.
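     For reference, a couple of hedged mdadm invocations that are often tried in this situation (a sketch, not a confirmed fix for this particular array - the output above says "spare rebuilding", so it may simply still be syncing and become active on its own):
         # Watch whether the rebuild is actually progressing
         cat /proc/mdstat
         # If it is genuinely stuck, the device can be dropped and re-added so the
         # resync restarts (assumes /dev/sdb5 is safe to remove from the already
         # degraded-but-running array):
         mdadm /dev/md3 --fail /dev/sdb5
         mdadm /dev/md3 --remove /dev/sdb5
         mdadm /dev/md3 --add /dev/sdb5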

  • Choosing the right e-mail client

    - by CFP
     Hi all, I'm currently using Outlook 2007 (under Windows 7), but I much prefer free software (open source being the best, of course), so I thought I'd ask for expert advice here. I thought it might be easier if I included a small "wanted list":
       - I receive about 15 to 30 e-mails every day, but I have large archives (10'000 emails) which I frequently need to access.
       - I usually open and close my mail program many times, so I'd like it to start pretty fast.
       - I cannot use an online mailbox, because I have too many email addresses (about 5: 1 for work, 1 for home, 1 semi-private, 1 for specific emails, and 1 for newsletters).
     By order of importance, the things I'd like my mail client to be able to do:
       - Efficiently categorize e-mails. Until now I've mostly been using Outlook folders, because filtering by tags was not easy, but I'd rather have one large list of mails, neatly tagged so I can easily filter. I'd love being able to select mails by tags (e.g. in a click or two (could be a tab) show all mails tagged with "software").
       - Create "tagging rules", such as "if the mail was sent to this address, add this tag", or "if the body contains ..., add that tag".
       - Sync contacts with Gmail, handle tasks (syncing with Toodledo would be awesome), possibly provide a calendar.
       - Create e-mail templates, signatures...
     Other ideas: a timeline, scripting support, being able to import MS Outlook emails, a nice backup format... Thanks for sharing ideas and suggestions!

  • Update BIOS on Sun Fire X4150 server

    - by Massimo
     I have some Sun Fire X4150 servers with a very old BIOS release (1ADQW015), which seems to have some compatibility problems with VMware ESX Server 3.5 and Windows 2008 R2 virtual machines, so I want to update the BIOS on them.
     The problem: according to this page, if your servers run ELOM (mine do), you first need to update to the latest ELOM release, then to the interim transition release, then finally you can update to the latest one. OK, I'm willing to do that... but it looks like Sun (now Oracle) will happily let you download the latest firmware DVD (3.3.0), but will not let you download the transition release (2.0) if you don't have a support contract.
     Well, I actually don't care at all about the servers' management controllers (we don't even use them), so upgrading from ELOM to ILOM is totally irrelevant to me; but I need to update the servers' BIOS. So my question is: can I update the servers' BIOS to the latest version without doing the full ELOM-to-ILOM migration, or will this not work (or even make the servers unusable)? Do BIOS versions and SP ones need to be matched, or can one be updated without bothering with the other?
     Bonus question: if this whole ELOM-to-ILOM thing actually is needed in order to update the BIOS, can that 2.0 CD-ROM be obtained without having a support contract with Sun/Oracle (which we are definitely not going to sign, given that it's quite old hardware)?
     Update: I tried upgrading only the BIOS on one of the servers, and it didn't boot anymore. So it really looks like a full firmware upgrade is needed, and the management controller and BIOS versions should be kept in sync. So... where can I find that *&!£%$% 2.0 CD-ROM? Or at least the transition firmware that can be found on it?

  • MD RAID 1 with external bitmap doesn't fully resync

    - by user64744
     I have an interesting configuration: a dual-boot system with a RAID 1 that needs to be visible in both Windows and Linux. The Windows install is Win 7 Enterprise, and the Linux install is Kubuntu 10.04. To get the RAID to work, I set it up using Windows's "Dynamic Disks" RAID 1, and brought it up in Linux using MD with no persistent superblock and a write-intent bitmap on another partition. (Without this bitmap, MD had no way of knowing that the array was in sync, and would do a complete resync every time the array started.) The array is assembled like so:
         mdadm --build /dev/md1 -l 1 -n 2 -b /var/local/md1.bitmap /dev/sdb2 /dev/sdc2
     I expected that the first time I ran this command, it would resync the array, write out a bitmap with no dirty chunks, and all would be good. This wasn't the case: after completing the resync, the bitmap was mostly clean, but about 5% dirty blocks remained, as revealed by:
         mdadm -X /var/local/md1.bitmap
     I didn't mount the filesystem on /dev/md1 or touch it in any other way. I then found that stopping and restarting the array:
         mdadm --stop /dev/md1
         mdadm --build /dev/md1 -l 1 -n 2 -b /var/local/md1.bitmap /dev/sdb2 /dev/sdc2
     did indeed read in the bitmap, with an ensuing resync that went quickly because most of the blocks were marked clean. The confusing part is that this resync further reduced the number of dirty blocks, but still did not remove all of them. By repeatedly stopping and restarting I could slowly bring the dirty block count down to around 0.6%, where it seemed to level out.
     Any ideas what could be causing this? It smells to me of a race condition somewhere that leads to blocks either being skipped over during synchronization or not properly cleared from the bitmap, but I really have no evidence to prove this. It doesn't look like a hardware issue, since both drives are new and have zero read errors and reallocated sectors reported by smartctl -a.

  • Disaster recovery backup of files/photos for personal use

    - by Renesis
     I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some type of fire/flood disaster in my home. I'm looking for:
       - Affordable: less than $100/yr or as a one-time cost.
       - Reliable: at least a smaller chance of failing than there is of fire or flood.
       - Easy for the initial backup and for adding to it, and at least semi-easy to recover.
     I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above.
       - Hard drive - I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice - very easy to take out once a month and add files to.
       - DVDs - way too much of a hassle for both backup and restore.
       - Tape - no idea on the affordability of this option.
       - Online - given that I have at least 300GB already and ever-increasing megapixels mean ever-bigger files, and my ISP upload is about 2Mb at best, this just doesn't sound like a good option for me, but I could be convinced.
       - Other - have I missed something?
     Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and in a disaster would be destroyed along with it. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two, to reduce the likelihood of failure?

  • NFS Datastore Appears Empty!

    - by daemonchild
     Hi guys, I've got an NFS server problem. The datastore connected and seems to be a valid datastore in both the vSphere client and under /vmfs/volumes. The issue is that it appears to be empty! I can create files (e.g. touch /vmfs/volumes/nfs_common/thefile) and they are correctly written to the NFS store - I can verify this by looking on the NFS server itself. But the vmkernel only sees an empty datastore; the file disappears. Another FreeBSD box can mount the same NFS share and see the files correctly. Some useful data:
       - ESXi 4.0.0 Build 208167
       - NFS is unfsd running on a Buffalo Linkstation Pro Duo (a bit hacky, I know).
       - The share has filesystem permissions set to 777 at the moment.
     My /etc/exports is as follows, and as I say it connects fine:
         /mnt/array1/ESX_Shared 192.168.16.0/255.255.255.0(insecure,rw,sync,no_root_squash,no_subtree_check)
     The ESXi servers can also successfully mount NFS shares from other NFS servers. Any ideas guys? Thanks, Tom
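     For reference, a hedged sketch of removing and re-adding the datastore from the ESXi console, which is sometimes a quick way to rule out a stale mount on the vmkernel side (the NFS server address and datastore name below are placeholders):
         esxcfg-nas -l                         # list current NFS datastores
         esxcfg-nas -d nfs_common              # remove the suspect datastore
         esxcfg-nas -a -o 192.168.16.10 -s /mnt/array1/ESX_Shared nfs_common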

  • OpenLDAP replication fails, "syncrepl_entry: rid=666 be_modify failed (20)"

    - by Pavel
     I've configured a second host to replicate the main LDAP server via syncrepl in slapd.conf:
         syncrepl rid=666
             provider=ldaps://my-main-server.com
             type=refreshAndPersist
             searchBase="dc=Staff,dc=my-main-server,dc=com"
             filter="(objectClass=*)"
             scope=sub
             schemachecking=off
             bindmethod=simple
             binddn="cn=repadmin,dc=my-main-server,dc=com"
             credentials=mypassword
     When I restart slapd, it writes to /var/log/debug:
         Jun 11 15:48:33 cluster-mn-04 slapd[29441]: @(#) $OpenLDAP: slapd 2.4.9 (Mar 31 2009 07:18:37) $ ^Ibuildd@yellow:/build/buildd/openldap2.3-2.4.9/debian/build/servers/slapd
         Jun 11 15:48:34 cluster-mn-04 slapd[29442]: slapd starting
         Jun 11 15:48:34 cluster-mn-04 slapd[29442]: null_callback : error code 0x14
         Jun 11 15:48:34 cluster-mn-04 slapd[29442]: syncrepl_entry: rid=666 be_modify failed (20)
         Jun 11 15:48:34 cluster-mn-04 slapd[29442]: do_syncrepl: rid=666 quitting
     I've looked into the sources for the return code and found only
         #define LDAP_TYPE_OR_VALUE_EXISTS 0x14
     in include/ldap.h. Anyway, I don't quite get what the error message means. Can you help me debug this problem and figure out why the LDAP replication doesn't work? I've managed to put a "manual" copy into the database via slapcat and slapadd, but I'd like to sync automatically.
     UPDATE: "Solved" by removing /var/lib/ldap/* and re-importing the database with slapadd.
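     For completeness, a hedged sketch of the re-seed that the update above refers to; the paths and init script are Debian-ish defaults and are assumptions, and slapd should be stopped on the consumer while the database is replaced:
         # On the provider: dump the database
         slapcat -l export.ldif
         # On the consumer: stop slapd, clear the old database, re-import
         /etc/init.d/slapd stop
         rm -f /var/lib/ldap/*
         slapadd -l export.ldif
         chown -R openldap:openldap /var/lib/ldap
         /etc/init.d/slapd start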

  • Which version control should I use for my configuration files?

    - by rakete
     I want to store some of my configuration files (~/.emacs.d/, .Xdefaults, etc. - Linux $HOME stuff) in version control so I can easily sync them between my notebook and workplace, see my past changes, and revert to them should the need arise.
     So far it seems to me that there are quite a few people using git for this, and I think that I too want to use a distributed VCS for this (if only to get more used to them), but I can't say that I am very experienced with all things DVCS. I did use darcs and git briefly, and so far I can say that I really like the way git handles branches, and I think the possibility of having different branches within the same directory is especially useful for my use case. Darcs, on the other hand, has cherry-picking of patches, which is also quite a convenient feature when managing configuration files (at least I assume it is).
     So, what would you recommend to use? And what would be your reasoning for your recommendation? What other VCSes with nice features that I haven't mentioned exist and would make a good VCS for storing configuration files, and why?
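     As one concrete illustration of the git approach (a common dotfiles pattern, not a recommendation from the question; the repository path, alias name, and remote are all placeholders):
         # Track $HOME dotfiles with a bare repository whose work-tree is $HOME
         git init --bare "$HOME/.dotfiles.git"
         alias config='git --git-dir=$HOME/.dotfiles.git --work-tree=$HOME'
         config config status.showUntrackedFiles no   # hide the rest of $HOME from status
         config add ~/.Xdefaults ~/.emacs.d/init.el
         config commit -m "Initial import of dotfiles"
         config remote add origin git@example.com:me/dotfiles.git   # placeholder remote
         config push -u origin master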

  • Data from a table in 1 DB needed for filter in different DB...

    - by Refracted Paladin
     I have a WinForms data-entry application that uses 4 separate databases. This is an occasionally-connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), WorkerName (varchar), and ClientID (int).
     The problem is that I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order for it to be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?
         SELECT <published_columns> FROM [dbo].[tblPlan]
         WHERE [ClientID] IN (
             SELECT ClientID FROM [dbo].[tblWorkerOwnership]
             WHERE WorkerID = SUSER_SNAME()
         )
     This allows you to chain filters together; the next one sits below the first one, so it only pulls from the first's filtered set:
         SELECT <published_columns> FROM [dbo].[tblPlan]
         INNER JOIN [dbo].[tblHealthAssessmentReview]
             ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

  • MySQL Master - Master Broken

    - by Recc
     I've inherited a MySQL master-master system. I've noticed the second master (let's call it "slave" from now on, as it's running on a 'slave' machine) stopped getting its DBs updated. I saw that:
         Master:
           Slave_IO_Running: Yes
           Slave_SQL_Running: Yes
         Slave: (with an error I truncated)
           Slave_IO_Running: Yes
           Slave_SQL_Running: No
           Last_Errno: 1062
           Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...]
     I don't know what caused it to process that, considering we can't get a duplicate there. What's important is to resume normal operations. Right now I've run stop slave; on the Master and stop slave; on the Slave, because I saw that if I change records on the Slave the changes Do Get Propagated to the Master, which is in active use. How do I:
       - Force-sync EVERYTHING from master to slave without affecting data on master?
       - Then hopefully have the slave pick up replication as usual?
     UPDATE: OK, I tried deleting all tables on the slave; then it complained in that error section that the 'table' doesn't exist. So I made a no-data dump of the Master and made sure I have only empty tables on the Secondary (slave). I ran start slave; on the slave, BUT now it's complaining about alter table statements, for instance:
         Last_Errno: 1060
         Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...]
     How do I skip the alter statements? I just want to replicate the data and be done with it; my tables already have the latest changes, and now it's complaining about changes made after the replication seized up weeks ago. How do I reset the log or something?
     OUTSTANDING: Why would this start happening? The "Secondary" is propagating to "Primary"; "Primary" is not propagating to "Secondary". Any fixes I tried left it in the same Yes-Yes / Yes-No state with the same Last_Error. I think around that time the server was taken off the network - could that confuse MySQL in some way?
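     A hedged sketch of the usual dump-and-re-point sequence for this situation (host name, binlog file, and position are placeholders, and this assumes it is acceptable to overwrite the slave copy entirely):
         # On the master: take a consistent dump and record the binlog coordinates
         mysqldump --all-databases --master-data=2 --single-transaction > full.sql
         # On the slave: stop replication, load the dump, then re-point it
         mysql -e "STOP SLAVE;"
         mysql < full.sql
         mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
                   MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456;
                   START SLAVE;"
         # As a last resort, individual offending statements can be skipped:
         mysql -e "STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;"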

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
     Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before things are in sync. Now I see two solutions to the problem of pics not being available on the 'home' server:
       1. I write a script on EC2 (where the app resides) to fetch from the DB the pics that have a status of "not-yet-synch", which is the default state when a user uploads a picture. The script then does a ping to the picture and, if it gets an OK response, updates the DB from "not-yet-synch" to "synch".
       2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. it doesn't find the requested image). This way I wouldn't need the script from solution 1.
     So which approach do you suggest I take to solve this redundancy problem? Or what is the practice in production environments? To further clarify: I'd like to serve images first from the 'home' server, and if that fails, serve them from S3. Tnx, Alan
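     For option 2, a hedged mod_rewrite sketch (the bucket name and URL paths are placeholders, and it assumes mod_rewrite is enabled on the 'home' Apache):
         RewriteEngine On
         # If the requested image is not (yet) on local disk, send the client to S3
         RewriteCond %{REQUEST_FILENAME} !-f
         RewriteRule ^/images/(.*)$ http://my-bucket.s3.amazonaws.com/images/$1 [R=302,L]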

  • DNS: how to get local server to superimpose results over authoritative server?

    - by growse
     I've got a domain for which I control the DNS, and it is hosted on the internet. I also have a NAT'd internal network (192.168.0.0/24) which has internet access, and which I also control. On this internal network I also have a DNS resolver. The DNS software on both is PowerDNS. What I want is for the DNS resolver on the internal network to be able to add to or change records of queries and results that come down from the authoritative server. For example, the authoritative server might have a single record for animal.example.com:
         animal.example.com. IN AAAA 2001:140:283::1
     However, I'd like it so that when internal clients do a DNS lookup for animal.example.com, they might get back the following:
         animal.example.com. IN AAAA 2001:140:283::1
         animal.example.com. IN A    192.168.0.2
     Obviously, I could set up the internal DNS server to pretend to be authoritative for example.com, but that would require a fair bit of effort to keep the main DNS server and the internal DNS server in sync for the records which are the same on both. If the internal DNS server could somehow be made a slave of the main DNS server, but also have the provision to add its own results in, that would be ideal. Is this possible?

  • How do I collect SNMP readings from intermittently-connected sites?

    - by Luke404
     I am collecting SNMP data on-site for a number of systems, currently using Cacti. These systems are spread across a number of sites that aren't always connected to the internet, but I also need to centralize the data on a single system (a datacenter-housed server) and get graphs out of it. If I directly polled the remote systems with a centralized Cacti, I'd lose data whenever a site is not connected to the internet. I should record data on-site (I have a server at each site and I can run whatever I want on it) and then 'sync' everything to the central system.
     One hack could be Cacti, or rrdtool directly, on-site and then periodically rsyncing the RRD data to the central Cacti system, but that doesn't sound like a 'clean' solution: every RRD would have to be defined in both places, and rsync scripts set up with the specific file names.
     Can you suggest a better solution? Cacti is not a requirement, but I'd like to use something like that on the central system. The on-site systems only need to collect data; I don't need to graph it there or manage user rights to view the data and stuff like that - users will only access the centralized system.
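     To make the rsync hack above concrete (the paths, central host, and schedule are assumptions, and this is the workaround the question already calls less than clean):
         # /etc/cron.d/rrd-sync on each site server - push RRDs every 5 minutes,
         # only transferring changed data; safe to rerun after an offline period.
         */5 * * * * root rsync -az --partial /var/lib/cacti/rra/ central.example.com:/var/lib/cacti/rra/site1/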

  • How to debug silent hang on shutdown of Solaris 10?

    - by jblaine
     We're experiencing a mysterious hang on shutdown of a newly-imaged Oracle/Sun Solaris 10 SPARC box. It is repeatable (in the same spot, from what we can tell). We let it try to work itself out multiple times for 5-10 minutes and it never progressed. I've never seen this happen before. The last thing displayed on the console is that syslogd was sent signal 15. Prior to us disabling snmpdx on the box, the last thing on the console was that snmpdx was sent signal 15 (after syslogd was sent signal 15).
     While very rare to find, in Solaris days past I'd have a better idea from experience where the problem might be, and could then narrow things down further with silly (but effective) debugging echo statements in /etc/*.d scripts. With SMF in the picture, I'm not really quite sure where to start. We forced a crash dump via sync at the {ok} prompt for later analysis, and then let the box come up because it's a production server and our scheduled outage window was closing. /var/adm/messages shows nothing of use. How would you debug this situation?
     mdb ::ps of the savecore shows the following processes were running at hang time (afsd is the OpenAFS client and that many are expected):
         > ::ps
         S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
         R      0      0      0      0      0 0x00000001 00000000018387c0 sched
         R    108      0      0      0      0 0x00020001 00000600110fe010 zpool-silmaril-p
         R      3      0      0      0      0 0x00020001 0000060010b29848 fsflush
         R      2      0      0      0      0 0x00020001 0000060010b2a468 pageout
         R      1      0      0      0      0 0x4a024000 0000060010b2b088 init
         R   1327      1   1327    329      0 0x4a024002 00000600176ab0c0 reboot
         R    747      1      7      7      0 0x42020001 0000060017f9d0e0 afsd
         R    749      1      7      7      0 0x42020001 00000600180104d0 afsd
         R    752      1      7      7      0 0x42020001 0000060017cb44b8 afsd
         R    754      1      7      7      0 0x42020001 0000060017fc8068 afsd
         R    756      1      7      7      0 0x42020001 0000060017fcb0e8 afsd
         R    760      1      7      7      0 0x42020001 00000600177f4048 afsd
         R    762      1      7      7      0 0x42020001 000006001800f8b0 afsd
         R    764      1      7      7      0 0x42020001 000006001800ec90 afsd
         R    378      1    378    378      0 0x42020000 0000060013aee480 inetd
         R      7      1      7      7      0 0x42020000 0000060010b28008 svc.startd
         R    329      7    329    329      0 0x4a024000 00000600110ff850 sh
         Z    317      7    317    317      0 0x4a014002 0000060013b3a490 sac

  • Having trouble mapping Sharepoint document library as a Network Place

    - by Sdmfj
     I am using Office 365, SharePoint Online 2013. Using Internet Explorer, these are the steps I have taken: I ticked "keep me signed in" on the portal.microsoftonline.com page. It redirects me to the GoDaddy login page because Office 365 was purchased through them. I have added these sites (as well as every page in the process) to trusted sites and chosen automatic logon in Internet Explorer. Once on the document library, I open it with Explorer and copy the address as text. I go to My Computer, right-click to add a network place, and paste in the document library address.
     It successfully adds the library as a network place about 30% of the time. I can do this same process 3 times in a row and it will fail the first 2 times and then succeed. It works for a little while and then I get an error that the DNS cannot be found. I need multiple users in our organization to be able to access this document library as if it were a mapped network drive on our local network. Is there an easier way to do this? I may just sync using the OneDrive app, but I thought direct access to the files would be preferable to worrying about users keeping their files synced.

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
     I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs on the authentication connection. It keeps the connection to the DSL service, but we lose authentication and either have to restart the router/modem (it's a combined unit, a Belkin one, not sure of the model number) or unplug the phone cable, wait about 30 seconds and plug it in again. I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little way from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but that hasn't helped.
     Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time - would it make any difference to change it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.

  • DRBD Replication failure

    - by user62513
     A couple of weeks ago I set up a 2-node CRM system with one of the managed resources being MySQL over DRBD. Today, for maintenance reasons, I restarted both nodes, but now they can't connect to each other anymore. DRBD fell out of sync and I followed this guide to get it connected again, but it's only able to run successfully on one node. This strange thing happens:
       - If I crm node standby both nodes and run crm node online node0 before crm node online node1, all the CRM resources start successfully, but the DRBD partitions are still running in StandAlone state.
       - If I run crm node online node1 before crm node online node0, the DRBD resource fails to start, thus causing MySQL not to start.
     If I standby both nodes and run crm node online node0, it times out and prints this error:
         Error setting standby=off (section=nodes, set=<null>): Remote node did not respond
         Error performing operation: Remote node did not respond
     Is there anything I'm doing wrong here? An alternative would be to just do MySQL replication, but I'm not sure how to promote a slave to master when the master database is not available.
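     For reference, a hedged sketch of the manual split-brain recovery DRBD documents for the StandAlone state (the resource name r0 is a placeholder, and the node chosen as the "victim" loses its local changes - an assumption about what is acceptable here):
         # On the node whose data should be discarded (the "victim"):
         drbdadm secondary r0
         drbdadm -- --discard-my-data connect r0
         # On the node whose data should survive (if it is also StandAlone):
         drbdadm connect r0
         # Then watch the resync progress:
         cat /proc/drbd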
