Search Results

Search found 13164 results on 527 pages for 'missing'.


  • Windows 8 folder to folder sync software

    - by Danny
    I'm looking for direct folder to folder synchronization in Windows 8. I was previously using Live Mesh to accomplish this, but now it looks like that is no longer an option. Note that I'm talking about direct folder to folder sync between different computers, not syncing to the cloud. I'm aware of products like Google Drive, SkyDrive, Dropbox, etc. The problem with them is the space limitation. Basically, I was syncing important files before between my desktop and all of my laptops. One folder for example is My Pictures. This folder has almost 40 gigs of files, which is why the options listed above are not going to work for me. Just need direct syncing, nothing stored on the cloud. I was told by a Microsoft employee that SkyDrive would be replacing Mesh and would provide all the same functionality. So far this looks to be completely false, since the ability to remote desktop is gone along with folder to folder sync. Unless I'm just missing something?
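    As an aside, one direct LAN-only approach that matches the "nothing stored on the cloud" requirement is a scheduled mirror job with robocopy; this is only a sketch, and the share name, paths and schedule are assumptions rather than anything from the question:

        rem Mirror My Pictures to a share on the laptop (paths and share name are examples only).
        rem /MIR makes the destination an exact mirror, so it also deletes files removed from the source.
        robocopy "C:\Users\Danny\Pictures" "\\LAPTOP1\Pictures" /MIR /FFT /Z /R:2 /W:5 /LOG:C:\sync\pictures.log

    Run from Task Scheduler on the desktop, that gives one-way folder-to-folder sync over the LAN; true two-way sync would need a dedicated tool.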

    Read the article

  • Vyatta server reboots by itself

    - by Fernando
    I have an issue regarding some hardware, maybe you can help me. First, I set up a Supermicro SuperServer SYS-5016I-NTF with an Intel Xeon X3470, 4 GB of RAM and a Hotlava Tambora 64G4 card with the Intel 82599EB chipset and 4x 10G SFP+ ports. I installed Vyatta community edition 6.3 and used it as a router making BGP connections with 2 operators. No load at all, temperature ranges normal. But the issue is that it reboots by itself in a random way. Not very often, once every few days, but it is unacceptable for production purposes. So I tried to test on different hardware, and installed Vyatta community edition 6.3 on a Dell PowerEdge 2950, with a Xeon(R) E5345 @ 2.33GHz and 4 GB of RAM. Same Vyatta configuration as the Supermicro server, with the same Hotlava card model (I bought two of them). Well, I have reboots with this equipment as well, at the same frequency as above. I have checked syslog: no strange logs until the boot process starts to be logged, so it seems the server reboots suddenly. I have installed the latest driver for the chipset of the Hotlava card. The servers are placed in a datacenter with UPS. So finally, two things in common in both servers: the Hotlava card (anyone with issues with this card, or the chipset? could it be this card?) and Vyatta 6.3 community edition. I don't think that's the problem; it's a regular Debian with packages to glue together different services. Or maybe it's something I am missing. Any ideas or suggestions? Thank you very much... Fernando

    Read the article

  • IIS 6 getting "Page Not Found" after applying SSL

    - by Dominic Zukiewicz
    I am setting up SSL certificates on a development environment using IIS 6 on W2k3. I have a directory called login with a single page, login.asp, which I would like to be viewable only over SSL. Before installing or applying SSL permissions, the page is viewable through a browser: I can browse the page, it redirects etc., and all is good. However, Basic Authentication is only Base64 encoded, so I want to secure the traffic from this page only. I have created a dummy certificate with makecert, installed it and added it to IIS; IIS is happy that it is trusted. I have set the login directory and child files to "Require SSL channel". When I refresh my browser on login/login.asp I now get a "404: Page Not Found" in IE 8. So two issues here: (1) the page is now unviewable when using HTTPS, and users must manually type the HTTPS URL (a minor inconvenience for now); (2) if I turn off "Require SSL channel" in IIS, it works again. What part of the process am I missing? I have followed several tutorials on installing SSL certificates, but still come across this barrier.

    Read the article

  • How can I set the CD audio volume in Linux?

    - by user1296362
    In the Windows 7 Control Panel - Sound - Sound Properties window there's a slider for setting CD Audio volume, and it's pretty strange that I can't find a corresponding one in the generic Linux mixers alsamixer or amixer. I connected a CD drive to try to set the CD audio volume with cdcd (CD Player):

        $ cdcd setvol 0
        Invalid volume

    It isn't actually an invalid volume; it's because the ioctl() call fails. I found that out after searching and changing the source code of this utility a bit (in libcdaudio):

        --- cdaudio.c.orig  2004-09-09 06:26:20.000000000 +0600
        +++ cdaudio.c       2012-05-30 21:34:34.167915521 +0600
        @@ -578,8 +578,10 @@
             cdvol_data.CDVOLCTRL_BACK_RIGHT_SELECT = CDAUDIO_MAX_VOLUME;
         #endif
        -    if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0)
        -        return -1;
        +    if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0) {
        +        printf("*** cd_set_volume: ioctl() returned error\n");
        +        return -1;
        +    }
             return 0;
         }

    By the way, cdcd's get volume command yields rather weird output:

              Left        Right
        Front 1281734864  32767
        Back  0           0

    Also I tried aumix:

        $ aumix -c 0

    But all with no success. I read in this manual, http://tldp.org/HOWTO/Alsa-sound-6.html (section 6.2, The mixer), that a CD channel can be present in amixer output. Maybe some drivers for the sound card are missing in my Ubuntu 12.04 LTS installation, though I don't think that's the case:

        $ lsmod | grep snd
        snd_mixer_oss          22602  0
        snd_hda_codec_hdmi     32474  1
        snd_hda_codec_realtek 223867  1
        snd_hda_intel          33773  4
        snd_hda_codec         127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        snd_hwdep              13668  1 snd_hda_codec
        snd_pcm                97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        snd_seq_midi           13324  0
        snd_rawmidi            30748  1 snd_seq_midi
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_seq                61896  2 snd_seq_midi,snd_seq_midi_event
        snd_timer              29990  2 snd_pcm,snd_seq
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                    78855 19 snd_mixer_oss,snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              15091  1 snd
        snd_page_alloc         18529  2 snd_hda_intel,snd_pcm

    All I need is just to mute, or set to 0, the volume level of the CD Audio channel, like I did in Windows 7, to get rid of the sibilant noise in the speakers.
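    If the sound driver does expose a CD playback control, the generic mixer can silence it without cdcd or aumix; a minimal sketch, assuming a simple control actually named "CD" exists on card 0 (check the first command's output before relying on the rest):

        # list the simple mixer controls and look for a CD entry (names vary by driver)
        amixer -c 0 scontrols
        # if a CD control is listed, zero its volume and mute it
        amixer -c 0 sset 'CD' 0%
        amixer -c 0 sset 'CD' mute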

    Read the article

  • Cannot start Postgres daemon after installing with Yum

    - by Sean the Bean
    I was trying to install Postgres 9.1.4 on Fedora 17 using Yum. If I do:

        sudo yum install postgres-libs
        sudo yum install postgres
        sudo yum install postgis

    all the installs appear to complete successfully (i.e., no errors), but I cannot start the Postgres daemon using:

        service postgresql initdb

    like the official Postgres download guide says to do (http://www.postgresql.org/download/linux/redhat/). The error says "Unknown operation initdb". RPM tells me that it installed psql to /usr/bin/, which I confirmed. It turns out that only a few components installed correctly (psql, pg_dump, pg_config, and a few others), but most are missing (e.g., pg_ctl and postgres). I've tried several different configurations and had several of my coworkers (with more Linux experience than me) look at it, but so far nothing has worked. Two of them have also run into similar issues installing Postgres using apt-get on Ubuntu, which makes me think the RPM isn't doing its job. It seems the only solution is to build it from source, which is more robust anyway, but of course it takes longer. I'm wondering, though, if anyone else has run into this issue and/or has successfully installed Postgres on either Fedora or Ubuntu using a package manager like yum or apt-get? Is the RPM broken?
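    For what it's worth, on Fedora the server daemon, pg_ctl and initdb ship in a package separate from the client tools, so the missing binaries may simply be an uninstalled package; the sketch below uses package names assumed from the Fedora repositories (worth verifying with yum search postgresql):

        # client tools come from postgresql; the daemon, pg_ctl and initdb come from postgresql-server
        sudo yum install postgresql postgresql-server
        # recent Fedora releases replace "service postgresql initdb" with this helper
        sudo postgresql-setup initdb
        # if that helper is absent, the cluster can be initialised directly as the postgres user:
        #   sudo -u postgres initdb -D /var/lib/pgsql/data
        sudo systemctl start postgresql.service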

    Read the article

  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 server running SQL 2005 SP3. The SQL server in question has a full backup (as a SQL Agent job) once a day and transaction log backups hourly; these are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages:

        DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs.

    My google-fu suggests that I need to change the full backup my SQL Agent job is running to a copy_only job. But I think this means that I can't use that backup with the transaction logs to restore the database if the building (including the DPM server) burns down. I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests. It is an option to set up a co-located DPM server elsewhere and have DPM stream the backup, but that's obviously more expensive than the current setup. Many thanks in advance.
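    For reference, the copy-only full backup that keeps turning up in searches looks roughly like the T-SQL below; the database name and path are placeholders, and COPY_ONLY simply keeps this backup outside the normal backup sequence so that DPM's express full backups remain the baseline:

        -- hypothetical database name and path; adjust to the databases in question
        BACKUP DATABASE [TfsWarehouse]
        TO DISK = N'D:\Backups\TfsWarehouse_copyonly.bak'
        WITH COPY_ONLY, INIT, STATS = 10;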

    Read the article

  • Can I recover an rm -rf'ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. The first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option. Can a kind soul please save me, and suggest a way to: a) get the repo back, or b) get the files back, with filenames. For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:

        :!hg rm %

    This complained that the file is in a subrepository, so I specified the following:

        :!hg rm % -R engine

    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:

        :!rm -rf % -R engine

    Somehow, seeing "force" makes me do a rm -rf by reflex.
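    As a rough aid for option (b), the unnamed recovered files can at least be narrowed down by content instead of being read one by one; a sketch only, assuming the scalpel output lives in ./recovered and you remember a distinctive string from a file you changed:

        # list recovered files containing a string you know you edited (the string is a placeholder)
        grep -rl 'class InvoiceController' ./recovered/
        # then compare a candidate against the last pushed copy of that file
        diff -u /path/to/checkout/InvoiceController.php ./recovered/00012345.php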

    Read the article

  • How to make network drives appear even if disconnected?

    - by Jake
    I have the same problem as many others: network and home drives set by Group Policy and AD are not connected on Windows startup. The prime suspect is that the LAN or wireless does not connect until after user log-in; I have already given up on that. Now, I just want the disconnected drives to continue to be listed in My Computer, so that if the user goes in and double-clicks the drive, it will connect again. However, on some machines the drive is completely missing from My Computer. If I right-click My Computer and choose Map Network Drive again, it does work, but it's very troublesome to do that all the time. And I don't want to use a script to map the drives, because I don't want to appear to be using a hacky solution to the users; drives listed as disconnected look more like a "built-in feature" and give users more confidence. How can I keep the disconnected drives in My Computer? I am using Windows 7 Professional and Win2k8.

    Read the article

  • Very long (>300s) request processing time on Apache server serving static content to particular IPs

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months we have had some users reporting slow response times, while others (including our resources, both on the internal network and our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); fixing that solved the bulk of our issues, but we still have some customers that we are seeing in the Apache logs (using %D in the log format) with request processing times of 300s for images, CSS, JavaScript and other static content. We've checked all Deny / Allow statements for a recurrence of "none", checked all the other things we know of that relate to reverse DNS lookups (such as "REMOTE_HOST" in rewrite rules, and using %a rather than %h in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for the folks having this problem do not time out, so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios that someone can point me to that I might be missing, that would cause request times for static content to take so long only for certain users? Thank you in advance.
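    For completeness, the lookup-avoiding settings the question refers to look roughly like this in an Apache 2.2 configuration; this is a sketch of what was presumably already verified, not a new fix, and the paths and log name are examples:

        # log the literal client address (%a) rather than the hostname (%h) so logging never triggers a lookup
        LogFormat "%a %l %u %t \"%r\" %>s %b %D" timed
        CustomLog logs/access_log timed
        # never resolve client addresses to names
        HostnameLookups Off
        # host-name based rules (e.g. "Allow from example.com") also force lookups; prefer IP ranges
        <Directory "/var/www/static">
            Order allow,deny
            Allow from all
        </Directory>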

    Read the article

  • backup of TFS with DPM 2007 - different backup times

    - by user46516
    I have DPM set up to back up my TFS server every 30 minutes, the reason being that it's a way better interface than the quirky SQL backup interface. I also do a full backup nightly using a SQL maintenance job. My thinking is that I would use DPM to restore my databases in case of losing my database, and the nightly full backup would be a "just in case" the DPM restore doesn't work. I was thinking a little harder about this setup today and started to think about the fact that the DPM backups of the individual databases happen in different 30-minute windows, i.e. one happens at 13h30, another at 13h34, etc. Would this difference in time be a problem when it comes to restoring the TFS server? If I restore the databases and they are from different times, will this create corruption, with pointers in one database pointing to missing items in the other database? Do the databases even rely on each other, or are they completely independent? Lastly, how would the SQL (log) backup cope with this?

    Read the article

  • Will these instructions work for turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling, and I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add the fstab options data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or are there any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstall it all and choose ext2 at partitioning time (I'd rather not, though, since I've configured quite some stuff already)?

    Read the article

  • MAMP Pro xip.io: fixing URLs with .htaccess

    - by user3540018
    I've got all my websites set up with MAMP Pro. For instance, I've got it set up so that when I go to example.com, the browser displays the website that's set up on my iMac. Now, I wanna get MAMP Pro to work so I can view my website on my other computers/devices (which are all hooked up to the same network). So far all I had to do was check the checkbox "via Xip.io (LAN only)", and now I can view my website on my other computers/devices within my LAN by simply going to example.com.10.0.1.13.xip.io. Problem is, whenever I'm on one of these other computers/devices and I click on the links, I get a 404 error; i.e. whenever I go to example.com/news, I get the 404, but when I go to example.com.10.0.1.13.xip.io/news, THEN I get the right page. So in order to solve my problem I need to rewrite the URLs, so that whenever someone clicks on a link, i.e. goes straight to example.com/news, he'll go to example.com.10.0.1.13.xip.io/news. I don't want to change all the links in my MySQL file, but I believe I can do it simply with the .htaccess file. I've opened the .htaccess file and added the last two lines, but it just doesn't work:

        <IfModule mod_rewrite.c>
            RewriteEngine On

            # Send would-be 404 requests to Craft
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$ [NC]
            RewriteRule (.+) index.php?p=$1 [QSA,L]

            RewriteCond %{HTTP_HOST} ^example\.com
            RewriteRule ^(.*)$ http://www.example.com.10.0.1.13.xip.io/$1 [R=permanent,L]
        </IfModule>

    Or perhaps I don't need to change the .htaccess file; is there something I could be missing in the MAMP Pro settings, or perhaps a MAMP extension that I need?
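    One thing worth checking in the block above: because the Craft catch-all runs first and ends its pass with [L], the host-based redirect added at the bottom is never evaluated against the original /news URL, only (on a later pass, if at all) against the already-rewritten index.php. A hedged rearrangement, with the host redirect placed first and a temporary 302 while testing, would look like this; whether it solves the cross-device 404s also depends on requests for example.com actually reaching this server:

        <IfModule mod_rewrite.c>
            RewriteEngine On

            # redirect plain example.com requests to the xip.io name before the catch-all runs
            RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
            RewriteRule ^(.*)$ http://example.com.10.0.1.13.xip.io/$1 [R=302,L]

            # Send would-be 404 requests to Craft
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$ [NC]
            RewriteRule (.+) index.php?p=$1 [QSA,L]
        </IfModule>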

    Read the article

  • How do I redirect my website from non-www to WWW using Apache2?

    - by Andrew
    I'm currently trying to set up my personal web page. I am using a VPS and have manually installed WordPress, and everything seems to work... except that if I go to the non-www version of my website, it comes up with a page not found.

        www.andrewrockefeller.com  <-- Works
        andrewrockefeller.com      <-- Does not (and I want to redirect it to www.andrewrockefeller.com)

    I have tried adding RewriteEngine functionality to my .htaccess, and that isn't working. I have also tried adding the "most-voted" method to my default file (which apache2.conf pulls from):

        <VirtualHost *>
            ServerName andrewrockefeller.com
            Redirect 301 / http://www.andrewrockefeller.com/
        </VirtualHost>

    Seeing how many people are able to get the above working, is there something else I may be missing to allow that to function? Thank you for your time!

    EDIT: My .htaccess file is as follows:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    The WordPress section was auto-created when I changed the settings from ?p=1 (ugly links) to pretty links. Any proposed solutions I've found on here I've tried out and restarted apache2, and it hasn't worked.
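    For comparison, the usual .htaccess form of the non-www to www redirect is sketched below (it would sit above the WordPress block); note that it can only help if andrewrockefeller.com actually resolves and reaches this server in the first place, so checking the bare domain's DNS A record is the hypothetical first step here:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            # send any bare-domain request to the www host, keeping the requested path
            RewriteCond %{HTTP_HOST} ^andrewrockefeller\.com$ [NC]
            RewriteRule ^(.*)$ http://www.andrewrockefeller.com/$1 [R=301,L]
        </IfModule>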

    Read the article

  • Warning messages while building Apache server

    - by GoinOff
    I am building Apache 2.4.6 from source and am not sure about a few warning messages I received during the RPM build process. The build completes OK and everything seems fine. BTW, this is on CentOS 5.5. During the make process:

        /home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr/libtool --silent --mode=install install mod_authn_file.la /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/modules/
        libtool: install: warning: remember to run `libtool --finish /usr/local/apache2/modules'

    What is this warning message about? "Remember to run libtool --finish"? Also, I see this:

        libtool: install: warning: `/home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr-util/libaprutil-1.la' has not been installed in `/usr/local/apache2/lib'

    I am building Apache in a temp directory, but libtool seems to be looking in the wrong place (/usr/local/apache2/lib instead of /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/lib). This seems like something I can blow off? In my spec file I set DESTDIR to /home/johnm/dev/project1/install/linux/tmp, where the install files are placed:

        %install
        export DESTDIR=%{buildroot}
        make install

    Both messages appear numerous times during the make process. When I install the RPM on the system, everything appears to work without problems. Thinking I can ignore these messages? Or am I missing something important?

    Read the article

  • Use Apache authentication to Segregate access to Subversion subdirectories

    - by Stefan Lasiewski
    I've inherited a Subversion repository, running on FreeBSD and using Apache 2.2. Currently, we have one project, which looks like this. We use both local files and LDAP for authentication.

        <Location />
            DAV svn
            SVNParentPath /var/svn
            AuthName "Staff only"
            AuthType Basic
            # Authentication through local file (mod_authn_file), then LDAP (mod_authnz_ldap)
            AuthBasicProvider file ldap
            # Allow some automated programs to check content into the repo (mod_authn_file)
            AuthUserFile /usr/local/etc/apache22/htpasswd
            Require user robotA robotB
            # Allow any staff to access the repo (mod_authnz_ldap)
            Require ldap-group cn=staff,ou=PosixGroup,ou=foo,ou=Host,o=ldapsvc,dc=example,dc=com
        </Location>

    We would like to allow customers access to certain subdirectories, without giving them global access to the entire repository. We would prefer to do this without migrating these subdirectories to their own repositories. Staff also need access to these subdirectories. Here's what I tried:

        <Location /www.customerA.com>
            DAV svn
            SVNParentPath /var/svn
            # mod_authn_file
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /usr/local/etc/apache22/htpasswd-customerA
            Require user customerA
        </Location>

        <Location /www.customerB.com>
            DAV svn
            SVNParentPath /var/svn
            # mod_authn_file
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /usr/local/etc/apache22/htpasswd-customerB
            Require user customerB
        </Location>

    I've tried the above. Access to '/' works for staff. However, access to /www.customerA.com and /www.customerB.com does not work. It looks like Apache is trying to authenticate 'customerB' against LDAP, and doesn't try the local password file. The error is:

        [Mon May 03 15:27:45 2010] [warn] [client 192.168.8.13] [1595] auth_ldap authenticate: user stefantest authentication failed; URI /www.customerB.com [User not found][No such object]
        [Mon May 03 15:27:45 2010] [error] [client 192.168.8.13] user stefantest not found: /www.customerB.com

    What am I missing?

    Read the article

  • Postfix "loops back to myself" error on relay to another IP address on same machine

    - by Nic Wolff
    I'm trying to relay all mail for one domain "ourdomain.tld" from Postfix running on port 2525 of one interface to another SMTP server running on port 25 of another interface on the same machine. However, when a message is received for that domain, we're getting a "mail for ... loops back to myself" error. Below are netstat and postconf, the contents of our /etc/postfix/transport file, and the error that Postfix is logging. (The high bytes of each IP address are XXXed out.) Am I missing something obvious? Thanks -

        # netstat -ln -A inet
        Proto Recv-Q Send-Q Local Address            Foreign Address   State
        ...
        tcp        0      0 XXX.XXX.138.209:25       0.0.0.0:*         LISTEN
        tcp        0      0 XXX.XXX.138.210:2525     0.0.0.0:*         LISTEN

        # postconf -d | grep mail_version
        mail_version = 2.8.4

        # postconf -n
        alias_maps = hash:/etc/aliases
        allow_mail_to_commands = alias,forward
        bounce_queue_lifetime = 0
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        default_privs = nobody
        default_process_limit = 200
        html_directory = no
        inet_interfaces = XXX.XXX.138.210
        local_recipient_maps =
        local_transport = error:local mail delivery is disabled
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq
        manpage_directory = /usr/local/man
        message_size_limit = 10240000
        mydestination =
        mydomain = ourdomain.tld
        myhostname = ourdomain.tld
        mynetworks = XXX.XXX.119.0/24, XXX.XXX.138.0/24, XXX.XXX.136.128/25
        myorigin = ourdomain.tld
        newaliases_path = /usr/bin/newaliases
        queue_directory = /var/spool/postfix
        readme_directory = /etc/postfix
        recipient_delimiter = +
        relay_domains = ourdomain.tld
        relay_recipient_maps =
        sample_directory = /etc/postfix
        sendmail_path = /usr/sbin/sendmail
        setgid_group = postdrop
        smtpd_authorized_verp_clients = $mynetworks
        smtpd_recipient_limit = 10000
        transport_maps = hash:/etc/postfix/transport
        unknown_local_recipient_reject_code = 450

        # cat /etc/postfix/transport
        ourdomain.tld relay:[XXX.XXX.138.209]:25

        # tail -f /var/log/maillog
        ...
        Aug 2 23:58:36 va4 postfix/smtp[9846]: 9858A758404: to=<nicwolff@... >, relay=XXX.XXX.138.209[XXX.XXX.138.209]:25, delay=1.1, delays=0.08/0.01/1/0, dsn=5.4.6, status=bounced (mail for [XXX.XXX.138.209]:25 loops back to myself)

    Read the article

  • Joomla SMTP Configuration Issue

    - by msargenttrue
    I'm having an issue with the SMTP setup of my Joomla website when trying to send mass emails through the CB Mailing (Mass Email) extension. I receive this error:

        SMTP Error! The following recipients failed:
        Number of users to whom e-mail was sent: 0 (Total in list: 1)

    The old version of this website's mass emailer worked fine; however, in order to add Kunena Forum and maintain compatibility I had to make several upgrades to the site. Both the new and old configurations are outlined below.

    Server for website: Mac OS X Server 10.4.11, Apache 1.3.4.1, PHP 5.2.3, MySQL 4.1.22
    Server for SMTP: Eudora Internet Mail Server 3.3.9 (EIMS Server X)

    New configuration: Joomla 1.5.25, Community Builder 1.7.1, CB Paid Subscriptions (CB Subs) 1.2.2, CB Mailing 2.3.4, Kunena Forum 1.7.0, Legacy 1.0 plug-in disabled

    Mail settings (new config):

        Mailer: SMTP Server
        Mail from: [email protected]
        From Name: CASPA
        Sendmail Path: /usr/sbin/sendmail
        SMTP Authentication: Yes
        SMTP Security: None
        SMTP Port: 25
        SMTP Username: [email protected]
        SMTP Password: xxxxxxx
        SMTP Host: 209.48.40.194

    Old configuration (working SMTP configuration): Joomla 1.5.9, Community Builder 1.2, CB Paid Subscriptions (CB Subs) 1.0.3, CB Mailing 2.1, Legacy 1.0 plug-in enabled

    Mail settings (old config):

        Mailer: SMTP Server
        Mail from: [email protected]
        From Name: CASPA
        Sendmail Path: /usr/sbin/sendmail
        SMTP Authentication: Yes
        SMTP Username: [email protected]
        SMTP Password: xxxxxxx
        SMTP Host: 209.48.40.194

    (Notice how the older version of Joomla is missing the two fields SMTP Security and SMTP Port.) Thanks in advance!

    Read the article

  • Cannot commit to SVN with DAV on Ubuntu

    - by hiddenkirby
    So there are several similar questions on serverfault... but the solution is still eluding me. I am running Subversion on Ubuntu 9.04, through Apache 2.2.x, and I get:

        Commit failed (details follow):
        Can't make directory '/home/kirb/svn/dav/activities.d': Permission denied

    when I attempt to commit. It is definitely a permissions issue, but how to fix it is still eluding me. My repository is in /home/kirb/svn. http://serverfault.com/questions/61573/svn-commit-error says to chgrp, but I don't seem to be able to. All the Apache DAV stuff seems to be working, though; I can access my repository just fine through a browser. Apologies if I am missing something simple here. Thanks in advance, Kirb

    Additional edit: I am not able to sudo chgrp on the directory at all:

        $ sudo chgrp -R www-data /home/kirb/svn; chmod -R g+rwx /home/kirb/svn
        [sudo] password for kirb:
        chmod: changing permissions of `/home/kirb/svn': Operation not permitted
        chmod: changing permissions of `/home/kirb/svn/format': Operation not permitted
        chmod: changing permissions of `/home/kirb/svn/conf': Operation not permitted
        chmod: cannot read directory `/home/kirb/svn/conf': Permission denied
        chmod: changing permissions of `/home/kirb/svn/locks': Operation not permitted
        chmod: cannot read directory `/home/kirb/svn/locks': Permission denied
        chmod: changing permissions of `/home/kirb/svn/db': Operation not permitted
        chmod: cannot read directory `/home/kirb/svn/db': Permission denied
        chmod: changing permissions of `/home/kirb/svn/README.txt': Operation not permitted
        chmod: changing permissions of `/home/kirb/svn/hooks': Operation not permitted
        chmod: cannot read directory `/home/kirb/svn/hooks': Permission denied
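    One small thing visible in that transcript: only the chgrp runs under sudo; the chmod after the semicolon is a separate command executed as the ordinary user, which on its own explains the "Operation not permitted" lines. A sketch of the same step with both commands elevated:

        sudo chgrp -R www-data /home/kirb/svn
        sudo chmod -R g+rwx /home/kirb/svn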

    Read the article

  • Why can't I copy install.wim from a Windows 7 ISO to USB (in a Linux environment)?

    - by fastreload
    I need to make a bootable USB disk from a Windows 7 ISO. My USB stick is formatted as NTFS and the ISO is not corrupt. I can copy install.wim elsewhere, but I cannot copy it to the USB stick. I even tried rsync.

    rsync error:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        rsync: write failed on "/media/52E866F5450158A4/sources/install.wim": Input/output error (5)
        rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.8]

    Stat for install.wim:

          File: `X15-65732 (2)/sources/install.wim'
          Size: 2188587580   Blocks: 4274600   IO Block: 4096   regular file
        Device: 801h/2049d   Inode: 671984     Links: 1
        Access: (0664/-rw-rw-r--)  Uid: ( 1000/ umur)   Gid: ( 1000/ umur)
        Access: 2011-10-17 22:59:54.754619736 +0300
        Modify: 2009-07-14 12:26:40.000000000 +0300
        Change: 2011-10-17 22:55:47.327358410 +0300

    fdisk -l:

        Disk /dev/sdd: 8103 MB, 8103395328 bytes
        196 heads, 32 sectors/track, 2523 cylinders, total 15826944 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc3072e18

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1   *          32    15826943     7913456    7  HPFS/NTFS/exFAT

    hdparm -I /dev/sdd:

        SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        ATA device, with non-removable media
            Model Number:       UF?F?A????U]r???U u??tF?f?`~
            Serial Number:      ?@??~|
            Firmware Revision:  ????V?
            Media Serial Num:   $I?vnladip raititnot baelErrrol aoidgn
            Media Manufacturer: o eparitgns syetmiM
        Standards:
            Used: unknown (minor revision code 0x0c75)
            Supported: 12 8 6
            Likely used: 12
        Configuration:
            Logical         max     current
            cylinders       17218   0
            heads           0       0
            sectors/track   128     0
            --
            Logical/Physical Sector size: 512 bytes
            device size with M = 1024*1024: 0 MBytes
            device size with M = 1000*1000: 0 MBytes
            cache/buffer size = unknown
        Capabilities:
            IORDY(may be)(cannot be disabled)
            Queue depth: 11
            Standby timer values: spec'd by Vendor
            R/W multiple sector transfer: Max = 0  Current = ?
            Recommended acoustic management value: 254, current value: 62
            DMA: not supported
            PIO: unknown
            * reserved 69[0]
            * reserved 69[1]
            * reserved 69[3]
            * reserved 69[4]
            * reserved 69[7]
        Security:
            Master password revision code = 60253
            not supported
            not enabled
            not locked
            not frozen
            not expired: security count
            not supported: enhanced erase
            71112min for SECURITY ERASE UNIT. 172min for ENHANCED SECURITY ERASE UNIT.
        Integrity word not set (found 0xaa55, expected 0x80a5)

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using Xen Hypervisor) for typical hosting workloads (mail, web, a couple VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes. Since we're an all-Dell shop I've done some research and found the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts on a single storage and I'm wondering if there is any clear advantage for one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual controller option). Both setups should let us implement live VM migration since all hosts can access all the LUNs at the same time, and also some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage). One difference I can see is that the DAS solution is actually limited to 4 hosts while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage which I think is a good thing (get rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper but it seems like an MD3200 actually costs a little less than an MD3200i (?) (please note: I've used Dell gear in my examples since that's what we're looking for but I assume the same goes with other vendors) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • How do I install git/git-svn on RHEL5 with a custom perl install?

    - by kbosak
    I've had nothing but trouble trying to install Git on RHEL5. First I tried from source, but ran into several issues with installing the docs. There appeared to be missing libs and such for parsing xml that I couldn't figure out how to get installed and recognized. Then I tried using the EPEL yum repository and was able to install git and its docs but now git-svn is not working. It complains about not finding the perl modules Git.pm and SVN/Core.pm. When I set the GITPERLLIB environment variable to the location of those libs it seg faults. Some background: RHEL5 came with perl 5.8.8, but we wanted to use 5.10 so I installed that from source (to a custom location). Someone then symlinked the system perl binary to this newer version of Perl to make sure nobody uses the wrong version. Each developer also has their own build of Perl. So I'm wondering what's the best way to install Git on this system and have both the docs and git-svn working correctly for each user. Unfortunately I'm a developer and not as good with system administration so take it easy on me.

    Read the article

  • Does btrfs balance also defragment files?

    - by pauldoo
    When I run btrfs filesystem balance, does this implicitly defragment files? I could imagine that balance simply reallocates each file extent separately, preserving the existing fragmentation. There is an FAQ entry, 'What does "balance" do?', which is unclear on this point: btrfs filesystem balance is an operation which simply takes all of the data and metadata on the filesystem, and re-writes it in a different place on the disks, passing it through the allocator algorithm on the way. It was originally designed for multi-device filesystems, to spread data more evenly across the devices (i.e. to "balance" their usage). This is particularly useful when adding new devices to a nearly-full filesystem. Due to the way that balance works, it also has some useful side-effects: If there is a lot of allocated but unused data or metadata chunks, a balance may reclaim some of that allocated space. This is the main reason for running a balance on a single-device filesystem. On a filesystem with damaged replication (e.g. a RAID-1 FS with a dead and removed disk), it will force the FS to rebuild the missing copy of the data on one of the currently active devices, restoring the RAID-1 capability of the filesystem.
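    Worth noting alongside the question: btrfs has a separate, explicit defragmentation command, so if balance turns out not to defragment, the operation below is the dedicated tool; the path is an example, and on snapshotted filesystems defragmenting can un-share reflinked data, so treat this as a sketch to test first:

        # recursively defragment files under /data, printing each file as it goes
        sudo btrfs filesystem defragment -r -v /data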

    Read the article

  • What could cause a TFTP-reloaded Cisco `running-config` on an 871 to fail?

    - by xtian
    Cisco CCP's Write Configuration borked my 871W config while I was trying to set up port forwarding, so I went through the basic steps to reconfigure the router. I looked to see if I could just reset the router: nope. I tested the 871's flash memory with fsck to see if there was a hardware failure: nope. Then I rewrote the minimal config needed for TFTP (which is the same as for Cisco's CCP app) and successfully uploaded a previously working running-config from Windows Vista using the SolarWinds TFTP Server; unfortunately the restore was not entirely successful. The old running-config was saved to the 871's startup-config and I can log in using the console port. Some other things that are working are the hostname and welcome message, but that's about it. Startup shows an error "SETUP: new interface NVI0 placed in 'shutdown' state" after the TFTP load. The missing light on the access point modem for the Ethernet link shows the 871's outside FE4 is not working. So... what's the possible problem with reloading a previously working config (approximately 4 months with the same config) via TFTP? Is there something I can look for on the 871 to verify the config? Or on Vista, to validate the config file itself before I transfer it? Or is this a common TFTP issue?

    UPDATE: I missed the instruction from Cisco's TFTP page to delete the aaa lines from the config (although a video from a Super User user didn't make this point in his otherwise excellent demo). There were several lines of this sort; I deleted them and uploaded again. However, they're back. I assume they're added automatically? [nope, see answer]

    UPDATE 2: The reload of the previous settings was successful, but this error remains. I don't even know now if it was there before or not. It seems irrelevant to the question.
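    For reference, the restore itself was presumably something along the lines of the sketch below (server address and filename are placeholders); copy tftp: merges the file into the running config rather than replacing it, which is worth keeping in mind when comparing results:

        Router# copy tftp://10.0.1.5/router-backup.cfg running-config
        Router# copy running-config startup-config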

    Read the article

  • Explorer.exe keeps crashing during log in

    - by asif
    I have got a weird problem. My Windows 7 machine has two user accounts (both are administrators). I can log in to one account and do all sorts of work, but whenever I try to log in to the other account, it shows a blank screen and a message box pops up with "Windows Explorer has stopped working". The options available are:

        Close the program
        Check online for a solution and close the program

    The problem signature is as follows:

        Problem Event Name: InPageError
        Error Status Code: c000009c
        Faulting Media Type: 00000003
        OS Version: 6.1.7601.2.1.0.256.1
        Locale ID: 1033
        Additional Information 1: 0a9e
        Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
        Additional Information 3: 0a9e
        Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    If I press Ctrl+Alt+Del and then select Start Task Manager, it also crashes. I cannot run any program using the runas command (from the good profile) either; Task Manager and runas show the same problem signature. I read the similar question and followed all the steps, but no luck. Later, I viewed the event log and found that explorer.exe could not access a file. I checked the location, but the file is there. The actual message is:

        Windows cannot access the file C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Windows Explorer because of this error.

    The question is, how can I resolve this issue? Should I just delete the file or replace it with another one to stop explorer.exe from crashing? Off-topic: what is the content of this file and why is it necessary?
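    Since the event log points at a single cache file, one low-risk experiment (purely hypothetical, done from the account that still works, and only after backing the file up) is to move it aside and see whether Explorer recreates it cleanly on the next log-in:

        rem run from an elevated prompt in the working administrator account
        cd /d C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches
        ren {AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db {AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db.bak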

    Read the article

  • IIS 6 302, 401 Error

    - by lvandiest
    I'm having some problems accessing an ASP.NET website hosted on an internal IIS 6 server that I am maintaining. Some users can get to the site; others (including myself) can't. The app has Windows Authentication mode set in the web.config file, and Integrated Windows Authentication checked in the website properties. Anonymous access is not checked. In the IIS logs, I see two lines when I make a request for the site's default page (Default.aspx): the first is a 401.2 error, and the second is a 302.0. I've tried switching around as many security settings as I can think of, but have had no luck yet. Can someone please help? I'm mainly a programmer, but have done a little IIS administration, so it is probably something quite simple I am missing. Here are the log entries for my request to Default.aspx:

        2011-01-11 21:17:35 10.100.1.6 GET /MonthEndInventory/Default.aspx - 80 - 10.100.1.111 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+OfficeLiveConnector.1.4;+OfficeLivePatch.1.3;+.NET+CLR+1.1.4322;+Tablet+PC+2.0;+.NET4.0C;+.NET4.0E;+InfoPath.3;+MS-RTC+LM+8) 401 2 2148074254
        2011-01-11 21:17:35 10.100.1.6 GET /MonthEndInventory/Default.aspx - 80 DOMAIN\myuserid 10.100.1.111 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+OfficeLiveConnector.1.4;+OfficeLivePatch.1.3;+.NET+CLR+1.1.4322;+Tablet+PC+2.0;+.NET4.0C;+.NET4.0E;+InfoPath.3;+MS-RTC+LM+8) 302 0 0

    Read the article
