Search Results

Search found 21742 results on 870 pages for 'google earth'.


  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    A question for which I'm having trouble finding an answer on Google or TechNet: does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?)

    It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that the SYSTEM account didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating).

    Naturally, this made me curious as to whether DFS needs the SYSTEM account to have permissions to do its work, or whether it was just any change to the folder tree in question that prompted DFS to jump into action. If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)
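    For reference, the command-line equivalent of what I did through the GUI would be something like the following, run from an elevated prompt; the path is a placeholder for the actual replicated folder, so treat this as a sketch of the change rather than the literal command I used:

        rem grant SYSTEM full control and propagate it to all children
        rem (OI)(CI) = object/container inherit, /T = recurse into existing items
        icacls "D:\Shares\DFSFolder" /grant "SYSTEM:(OI)(CI)F" /T

    That one change is what took the reported backlog from ~80 files to ~100,000 and got things moving again.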

    Read the article

  • Macbook Pro 2010 Ethernet Jumbo Frame(9k MTU) Support?

    - by Troggy
    Has anyone been able to use jumbo frame support on their 2010 MacBook Pros? This is kind of negative news here, but I am seeing many reports that this is not available anymore due to Apple's choice of network card in the new machines. I cannot set the MTU above 1500 on my new 2010 MBP i7, but my old early 2008 MBP (Core2) offers the 9000 MTU setting. Everything I have is set up to use jumbo frames, and I thought Apple kept that feature in their "pro" lineup. It sounds like the Mac Pro still has it. Did they decide to use a chipset that doesn't support it? I am trying to pinpoint some solid chipset numbers and the feature support. Maybe they just need to update the drivers? Is there some more official information about this feature? This might seem minor, but it is really frustrating if Apple removed this feature from their pro laptop line. From what I have read so far, it sounds like I am not alone in my frustrations with this. http://discussions.info.apple.com/message.jspa?messageID=12258067 http://discussions.apple.com/thread.jspa?messageID=12130158 Anyone have any experience or further knowledge about this issue ... beyond typing my question into Google and giving the top 5 results as answers? :)
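    For the record, this is roughly how I've been checking and setting the MTU from Terminal; en0 is my guess at the built-in Ethernet interface on my machine, so adjust to whatever port networksetup reports:

        networksetup -listallhardwareports   # find which enX is the built-in Ethernet
        ifconfig en0 | grep mtu              # show the current MTU
        sudo ifconfig en0 mtu 9000           # works on the 2008 MBP, refused on the 2010 one
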

    Read the article

  • Apache mod_rewrite and mod_vhost_alias Virtual Hosts and %1

    - by Matt Wall
    I have put the main bits of my httpd.conf down below. I am using %1 to get the host field so I can dynamically add vhosts by just creating dns/folders. One problem is I need to reference this:

        HttpStreamingLiveEventPath "D:/FMSApps/%1"
        HttpStreamingContentPath "D:/FMSApps/%1"

    In Apache, when I try say to do this: http://test.domain.com/hds-vod/myfile.mp4.f4m it sees the %1 in the logs, and fails. Apache gives me this:

        [error] mod_jithttp [403]: No access to D:/Content/%1/DefaultContent/eve.mp4

    What I'm looking for is the D:/Content/%1/DefaultContent/eve.mp4 to become D:/Content/test/DefaultContent/eve.mp4. Anyone have any useful resources / hints etc. to help me? Meanwhile my Google searching continues...!

        Listen 80
        ServerName main1.rtmphost.com
        AccessFileName .htaccess
        ServerSignature On
        UseCanonicalName Off
        HostnameLookups Off
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 15
        RewriteLogLevel 0
        RewriteLog logs/rewrite.log
        DocumentRoot D:/Content
        LoadModule vhost_alias_module modules/mod_vhost_alias.so
        VirtualDocumentRoot "D:/Content/%1"
        RewriteEngine On

        <Directory />
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
            Satisfy all
        </Directory>

        <IfModule f4fhttp_module>
            <Location /vod>
                HttpStreamingEnabled true
                HttpStreamingContentPath "D:/FMSApps/%1"
                Options FollowSymLinks
            </Location>
            Redirect 301 /live/events/livepkgr/events /hds-live/livepkgr
            <Location /hds-live>
                HttpStreamingEnabled true
                HttpStreamingLiveEventPath "D:/FMSApps/%1"
                HttpStreamingContentPath "D:/FMSApps/%1"
                HttpStreamingF4MMaxAge 2
                HttpStreamingBootstrapMaxAge 2
                HttpStreamingFragMaxAge -1
                Options FollowSymLinks
            </Location>
        </IfModule>

    Read the article

  • DNS setup problems with Windows Azure VPS

    - by jbigelow
    What is the proper way to set up the A record (or CNAME) for a Windows Azure VPS? I can't connect to my website after setting up IIS and believe I don't have the correct DNS setup.

    I created a small VPS instance with the default Windows Server 2012 configuration. I RDP'd in and added the Web Server role. In my DNSMadeEasy control panel I added an A record with my public virtual IP address. In IIS I went to the default website and added bindings for the hostname of my website, so I should be able to type mywebsite.com and see the IIS 8 splash screen, but instead my browser cannot connect. I attempted to navigate to the site by typing my virtual IP address into the browser and still cannot connect. I RDP'd back into the machine and turned off Windows Firewall. No change, still cannot navigate to my website.

    From within IIS I double-checked my binding. If I click "Browse *:80" I can bring up my website in IE at the http:// localhost address. If I click "Browse mywebsite on *:80" IE says "This page cannot be displayed." From within the RDP session I can view the site if I navigate to http:// 127.0.0.1, but not if I navigate to my virtual IP, nor can I view the page if I try navigating to http:// mywebservername.cloudapp.net

    I'm thinking I must be fundamentally misunderstanding how to do DNS setup with an Azure VPS, but my initial Google searches aren't turning up any helpful information. (Spaces added after the http:// so serverfault doesn't try to render them as valid urls.)
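    In case it helps, this is roughly how I've been testing from outside the VM (mywebsite.com and mywebservername.cloudapp.net are the placeholders used above). I'm also wondering whether I still need to add an inbound endpoint for TCP 80 to the VM in the management portal, which I haven't ruled out:

        nslookup mywebsite.com                        # should return the public virtual IP from DNSMadeEasy
        curl -I http://mywebsite.com/                 # times out for me instead of returning the IIS splash page
        curl -I http://mywebservername.cloudapp.net/  # same result against the cloudapp.net name
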

    Read the article

  • Weird routing issue

    - by Joel Coel
    I'm having some weird internet problems on campus. I know it's something simple, but it's a case where I need another set of eyes. I think I can explain the problem best by posting a tracert:

        Tracing route to google.com [74.125.45.147]
        over a maximum of 30 hops:

          1     3 ms     3 ms     3 ms  192.168.8.1
          2     1 ms     1 ms     1 ms  elissaemily-pc.york.edu [192.168.10.5]
          3     2 ms     2 ms     2 ms  rrcs-76-79-19-33.west.biz.rr.com [76.79.19.33]
          4    31 ms     3 ms     2 ms  ge-1-1-0.lnclne00-mx41.neb.rr.com [76.85.220.109]
          5    20 ms    17 ms    17 ms  ge-7-3-0.chcgill3-rtr1.kc.rr.com [76.85.220.137]
          6    20 ms    20 ms    19 ms  ae-5-0.cr0.chi30.tbone.rr.com [66.109.6.112]
          7    19 ms    19 ms    24 ms  ae-1-0.pr0.chi10.tbone.rr.com [66.109.6.155]
          8    26 ms    24 ms    24 ms  74.125.48.109
          9    23 ms    24 ms    21 ms  216.239.46.246
         10    39 ms    39 ms    55 ms  209.85.242.215
         11    39 ms    39 ms    39 ms  209.85.254.243
         12    39 ms    40 ms    96 ms  209.85.253.145
         13    39 ms    39 ms    39 ms  yx-in-f147.1e100.net [74.125.45.147]

        Trace complete.

    Note the second entry in there. Not only is the host name a student's computer, but the IP address doesn't exist. DHCP shows that host as having a different address, and you can't ping 192.168.10.5. Yet somehow it's routing packets for us (and not very well, either — things are slow right now). The basic network routing table looks like this:

        Destination      Subnet Mask      Gateway
        ---------------------------------------------------
        Default Route    --               10.1.1.5 (our firewall)
        10.0.0.0         255.0.0.0        --
        192.168.8.0      255.255.252.0    --
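    If it helps, my next step is to check, from an affected workstation, which MAC addresses are answering for the gateway and for that phantom address, since hop 2 makes it look like something on the LAN is inserting itself into the path. Roughly this (the hostnames/addresses are just the ones from the tracert above):

        rem which MAC is currently answering for the default gateway and the phantom IP?
        arp -a 192.168.8.1
        arp -a 192.168.10.5
        rem compare against the student PC's real adapters
        getmac /s elissaemily-pc
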

    Read the article

  • Browsing Playstation 3 from Ubuntu Box

    - by zfranciscus
    Hi, my goal here is to be able to transfer files from my Ubuntu 10.04 box to my PS3 over a wireless network. My PS3 and my Ubuntu box are on the same home network. Here is some of what I have tried:

    1. I can't see my PS3 from 'Places → Network', nor the other computer that is running Windows. I am still puzzled by this fact. I have not tried pinging my PS3; I'll try that later today and post the result in the forum (details in the PS below).
    2. I installed PS3MediaServer (http://code.google.com/p/ps3mediaserver/). Someone in the PS3 chat forum (http://www.ps3chat.com/playstation-3...lp-please.html) claims that it will enable an Ubuntu box to browse the PS3. So I downloaded PS3MediaServer and ran the software. The status tab keeps showing "Waiting ...". It seems that PS3MediaServer can't find my PS3.

    Somehow I feel that 1 and 2 are related. Perhaps there is something wrong with my Ubuntu network settings that is preventing my Ubuntu box from looking up other devices on the network. I can get to the Internet fine, but I can't see other devices on my network. Does anyone have any experience in this area? I would like to hear your experience and perhaps some solutions. Any kind of hints or help will be greatly appreciated. Cheers
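    PS: when I get home I'm going to start with something like this from the Ubuntu box; 192.168.1.0/24 is just a guess at my home subnet, so that part is an assumption:

        # is the PS3 even reachable from the Ubuntu box?
        sudo apt-get install nmap
        nmap -sP 192.168.1.0/24     # ping scan: list the devices that answer on the LAN
        ping -c 4 192.168.1.50      # then ping whatever address the PS3 turns out to have
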

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and to obtain better SEO results along the way. We have now applied some minify measures, combined HTML, CSS, etc. We use a small virtualized infrastructure and we've always wanted to use a light + standard HTTP server configuration so the first one can serve images and static content while the other one handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and more quickly, have more MinSpareServers running, etc.

    So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, thus allowing the browser to open more connections.

    The question is, assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache of having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out; does the original code already include images linked from the CDN with absolute paths?

    EDIT: Just to clarify, our concern is not so much with server performance or bandwidth. We could obviously employ an external CDN server, but we have plenty of CPU and bandwidth. Our concern is with how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache, without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
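    To make the question concrete, the behaviour we'd be checking for after adding such a rule is simply the following (the image path is made up; main.com and cdn.main.com are the placeholder hostnames from above):

        # every image request to the main host should answer with a single 301 to the cdn host
        curl -I http://main.com/images/photo.jpg
        # and the cdn host should then serve the same file directly with a 200
        curl -I http://cdn.main.com/images/photo.jpg
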

    Read the article

  • Are web service handler chains possible under IIS / ASP.NET

    - by Mike
    I'm working with a client who wants me to implement a particular design in an IIS/ASP.NET environment. This design was already implemented in Java, but I am not sure it is possible using Microsoft technologies.

    In a Tomcat/Java environment one can create so-called handler chains. In essence, a handler runs on the server on which the web service is running, and it intercepts the SOAP message coming to the web service. The handler can perform a number of tasks before passing control to the web service. Some of these tasks may relate to authentication and authorization. Moreover, one can create handler chains, such that the handlers run in a particular sequence before passing control to the web service. This is a very elegant solution, as certain aspects of authentication and authorization can be performed automatically, without the developer of the client application or of the web service having to invest anything in it. The code for the client application and the web service is not affected. You may find a number of articles on the internet on this subject by searching Google for "web service handler chain".

    I performed searches for web service handlers in IIS or ASP.NET. I get some hits, but apparently handlers in IIS have another meaning than that described above. My question therefore is: can handler chains (as available in Java and Tomcat) be created in IIS? If so, how (any article, book, forum...)? Either a negative or a positive answer will be greatly appreciated. Mike

    Read the article

  • Browser sends http request with RANGE

    - by nute
    I have a local testing environment in a Fedora virtual machine. Strangely, resources (CSS and JS files) don't seem to work. Looking at Firebug, I see that the browser sends the HTTP request with "Range: bytes=0-". The server responds with either an empty 200 OK or an empty 206 Partial Content. Here is an example:

        Response Headers
        Date              Mon, 23 Nov 2009 23:33:26 GMT
        Server            Apache/2.2.13 (Fedora)
        Last-Modified     Thu, 19 Nov 2009 22:58:55 GMT
        Etag              "18-3aec-478c14dbee138"
        Accept-Ranges     bytes
        Content-Length    15084
        Content-Range     bytes 0-15083/15084
        Connection        close
        Content-Type      text/css

        Request Headers
        Host              fedora.test
        User-Agent        Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091105 Fedora/3.5.5-1.fc11 Firefox/3.5.5
        Accept            text/css,*/*;q=0.1
        Accept-Language   en-us,en;q=0.5
        Accept-Encoding   gzip,deflate
        Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive        300
        Connection        keep-alive
        Referer           http://fedora.test/pictures/
        Cookie            __utma=26341546.1613992749.1258504422.1258569125.1258752550.4; __utmz=26341546.1258504422.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=tqf8jfmc77qihe97rl4tmhq685
        Range             bytes=0-
        If-Range          "18-3aec-478c14dbee138"

    I don't know if the browser is sending the wrong request, or if it's the server that is doing this. Requests made to the outside (such as Google Analytics) are working fine. This is running in Fedora 11 in VirtualBox. Apache. PHP. The files are being served through the "shared folders" feature of VirtualBox (could it be related?). No error logs could help me.
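    One thing I'm planning to test, on the assumption that the VirtualBox shared folder is what's tripping Apache up, is turning off sendfile/mmap for the served content and reloading; the config file name and location below are my own choice:

        # disable sendfile/mmap, which are known to misbehave on some odd filesystems
        printf 'EnableSendfile Off\nEnableMMAP Off\n' | sudo tee /etc/httpd/conf.d/vbox-shared.conf
        sudo apachectl configtest && sudo apachectl graceful
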

    Read the article

  • BOINC error code -1200 when opening...

    - by Erik Vold
    I installed BOINC 6.10.21 on my Mac OS X 10.5 machine today in order to upgrade from the 6.6 version that I had been running. I am the admin user, and I was logged in as the admin user. As I was installing 6.10.21 I was asked if non-admin users should be allowed to use BOINC, and I said 'yes' to this. Then when I tried to open BOINC I got a message like the following:

        You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group.

    So I tried to reinstall first, and I was not asked if non-admin users should be allowed to use BOINC... so I retried a few times and got no different result. So I downloaded 6.10.43 and installed that, and again I was not asked if non-admin users should be allowed to use BOINC... and when I tried to run BOINC I got the same message again.

    So I did a Google search trying to figure out how to add my admin user to the boinc_master user group and found a page which suggested I run the following in Terminal:

        sudo dscl . -append /Groups/boinc_master GroupMembership <your user's short name> CR

    So I did this, and now I get the following error:

        BOINC ownership or permissions are not set properly; please reinstall BOINC (Error code -1200)

    So I reinstall, and I am never asked the question about allowing non-admin users again, and I still get this error message after every reinstall attempt. What should I do?
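    For what it's worth, this is how I've been checking whether the group change actually took; my short name is a placeholder below, and the app path is from memory, so treat it as a sketch:

        dscl . -read /Groups/boinc_master GroupMembership   # does my short name show up in the group?
        id -Gn myshortname                                   # is boinc_master listed among my groups?
        ls -ld /Applications/BOINCManager.app                # who owns the installed app bundle?
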

    Read the article

  • Networking "chokes" on Windows 7 64 bit

    - by Rohit Nair
    I've been having this problem for some months now, and I have been unable to figure out a solution, or even the cause. At random points throughout the day, my internet connectivity "freezes". I don't get disconnected from my local wireless network. My router doesn't get disconnected from the world. However, for some reason, my computer stops receiving packets. If I'm playing an MMO ( World of Warcraft, in this case, but it has happened with Eve Online as well ) all activity just freezes. If I try to browse, Opera, Firefox and IE all stall at "Waiting for google.com..." or whatever the hostname may be. Inspection with a packet sniffer seems to reveal that there are no incoming packets. Here's the interesting part. Disconnecting from my wireless network and reconnecting fixes the issue. Obviously this led me to conclude that it was a problem with my router or wireless card. However, I have tweaked all the settings on my router that I could think of, including things like QoS, AP Isolation, etc. with no change. My wireless card doesn't really have that many options, and I have uninstalled and reinstalled drivers a few times without any change. Windows Firewall on/off doesn't make a difference. Anyone have any suggestions for debugging this? It's becoming an annoyance.

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote up a Python script to ping the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure. It does that about every minute. What I have found is that when our internet goes out, our Juniper router stops responding to pings on the external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type. They all look normal. The logs complain about VIP servers being unavailable but otherwise show nothing indicative of network issues. My questions are these: Does this exonerate our ISP? Or, conversely, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down info on the problem?

    UPDATE: Turned out that one of the switches between my monitoring box and the router was a router itself, and occasionally diverting traffic from the gateway to itself. Kudos to those who made suggestions along those lines. Not really sure which answer to mark as accepted, as it was really stuff in the comments that turned out to be right. Thanks for the suggestions.

    Read the article

  • Sendmail Sends but never Delivers

    - by Jeremy
    I have tried 10 different email addresses hosted at Google, Yahoo!, GoDaddy, and some that are privately hosted, and each time I get the following errors. I have blocked out sensitive information, but you will be able to see the errors.

        Feb 16 17:06:50 xxxxx sendmail[31824]: o1GM6ovJ031824: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30054, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GM6oJo031825 Message accepted for delivery)
        Feb 16 16:54:19 xxxxx sendmail[31625]: o1GLsJPP031625: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30097, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GLsJah031626 Message accepted for delivery)
        Feb 17 09:05:52 xxxxx sm-mta[10620]: o1H6Z3jM005734: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=07:30:49, xdelay=01:15:36, mailer=esmtp, pri=571331, relay=aspmx3.googlemail.com. [209.85.222.4], dsn=4.0.0, stat=Deferred: Connection timed out with aspmx3.googlemail.com.
        Feb 17 10:35:23 xxxxx sm-mta[12828]: o1HEZwn8011833: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:59:25, xdelay=00:12:36, mailer=esmtp, pri=300353, relay=aln-mailrelay.att.net. [12.102.252.75], dsn=4.0.0, stat=Deferred: Connection timed out with aln-mailrelay.att.net.

    If you take a look, they all send, but then (HOURS later) I get the error "stat=Deferred: Connection timed out with {server}". I'm at my wits' end, because I use this same setup on each of my servers, and they all work.
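    In case it matters, this is how I've been testing raw outbound SMTP from the box, using one of the relays from the log above; I'm starting to suspect outbound port 25 is being blocked or throttled somewhere upstream:

        # can the box even open a connection to the MX that keeps timing out?
        telnet aspmx3.googlemail.com 25
        # or, if telnet isn't installed:
        nc -vz aspmx3.googlemail.com 25
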

    Read the article

  • Group Policy GPO not 'seen' at client

    - by fukawi2
    I have a new OU (natorg.local\NATO\Users) that I am trying to apply GP to. I have created a new user in this OU, and linked these 3 GPOs to the OU:

        DESKTOP - Folder Redirection (AppData)
        DESKTOP - Folder Redirection (Desktop)
        DESKTOP - Folder Redirection (Documents)

    Hopefully the names are sufficient to suggest what they do exactly. The settings are under User Settings, so there is no loopback processing required (if my understanding is correct). GP Modelling for the user and the specific computer says that the GPOs will/should be applied; however, on the client, gpresult doesn't even appear to see the GPOs under either "Applied" or "Not Applied":

        USER SETTINGS
        --------------
            CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local
            Last time Group Policy was applied: 25/06/2012 at 11:07:13 AM
            Group Policy was applied from:      svr-addc-01.natorg.local
            Group Policy slow link threshold:   500 kbps

            Applied Group Policy Objects
            -----------------------------
                LAPTOPS - Power Settings
                WSUS - Set Server Address
                OUTLOOK - Auto Archive
                SECURITY - Lock Screen After Idle
                Default Domain Policy
                DESKTOP - Regional Settings
                NETWORK - Proxy Configuration
                NETWORK - IE General Config
                OFFICE - Trusted Locations
                OFFICE - Increase Privacy
                OUTLOOK - Disable Junk Filter
                DESKTOP - Disable Windows Error Reporting
                DESKTOP - Hide Language Bar
                NETWORK - Disable Skype
                DESKTOP - Disable Thumbs.db Creation
                WSUS - Set Server Address

            The following GPOs were not applied because they were filtered out
            -------------------------------------------------------------------
                Local Group Policy
                    Filtering:  Not Applied (Empty)
                NETWORK - Google Chrome Configuration
                    Filtering:  Not Applied (Empty)
                SYSTEM - Event Log Configuration
                    Filtering:  Not Applied (Empty)
                SECURITY - Local Administrator Password
                    Filtering:  Not Applied (Empty)
                NETWORK - Disable Windows Messenger
                    Filtering:  Not Applied (Empty)
                SECURITY - Audit Policy
                    Filtering:  Not Applied (Empty)
                WSUS - Automatic Install
                    Filtering:  Not Applied (Empty)
                NETWORK - Firewall Configuration
                    Filtering:  Not Applied (Empty)
                DESKTOP - Enable Offline Files
                    Filtering:  Not Applied (Empty)

    I haven't altered permissions on the GPOs at all, and there is no WMI filtering... As I said, GP Modelling says that they should be applied. gpresult on the client correctly identifies the user as being in the correct OU (CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local). There are 2 x 2008 R2 DCs and a 2003 DC, the domain is at the 2003 functional level, and the client is Windows XP SP3. Can anyone suggest why these GP objects would be "invisible" to the client?
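    My next step on the client is a forced refresh followed by a verbose dump of the user scope, roughly as below, in case it reveals a reason the three folder-redirection GPOs are being skipped:

        rem force a foreground policy refresh, then dump verbose results for the user scope
        gpupdate /force
        gpresult /scope user /v
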

    Read the article

  • Weird Apache Crash (with Dump) zend_hash_find (), libphp5.so

    - by Jacob84
    To be honest, I don't have experience working with Apache. I'm just putting my best intentions into solving this and don't know if I'm doing it right, so any help will be greatly appreciated. We have a PHP page which is throwing the following message in the browser:

        Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.

    The logs in /var/log/httpd don't seem to help, because it seems that Apache is unable to write any information. So the exception or error is preventing the writing (maybe it occurs at some stage of the process that makes logging impossible?). I've read about the procedure for getting core dumps out of Apache, and here is the content:

        Reading symbols from /lib64/libgpg-error.so.0...(no debugging symbols found)...done.
        Loaded symbols for /lib64/libgpg-error.so.0
        Reading symbols from /usr/lib64/php/modules/zip.so...(no debugging symbols found)...done.
        Loaded symbols for /usr/lib64/php/modules/zip.so
        Core was generated by `/usr/sbin/httpd'.
        Program terminated with signal 11, Segmentation fault.
        #0  0x00007fb828fff712 in zend_hash_find () from /etc/httpd/modules/libphp5.so
        Missing separate debuginfos, use: debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64

    I've been looking in the PHP files and I haven't found any direct call to zend_hash_find (which seems to be where the crash happens). I've been looking on Google but found nothing related. Can somebody please help? Is there any step I can take to learn more? Thanks a lot, as always!
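    Following the hint in the last line above, this is what I plan to run next to get a readable backtrace; the core file path is a placeholder for wherever our dumps actually land:

        # install the symbols gdb asked for, then re-open the core for a full backtrace
        sudo debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64 php
        gdb /usr/sbin/httpd /tmp/apache-cores/core.12345 -ex "bt full" -batch
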

    Read the article

  • CouchDB crashes at startup when path to config file has space(s)

    - by Barry Wark
    I'm hoping to run CouchDB as a per-user Launch Agent on OS X. I'm using the couchdbx-core folder from the CouchDB Server.app as the base of my CouchDB deployment. I'd like each user to have their own couch instance (on a different port), necessitating separate config files for each instance. The logical place to put these files is in ~/Library/Application Support/ for each user. I can put the entire distribution in ~/Library/Application Support/my-app/couchdbx, and put the .ini at ~/Library/Application Support/my-app/local.ini. Starting CouchDB as bin/couchdb -a ../local.ini (from ~/Library/Application Support/my-app/couchdbx) works great.

    But I'd like to save every user the ~50MB couchdbx folder and install the couchdbx-core in a shared location (e.g. within my app's .app bundle). When I do this, the path to the per-user config file contains a space, and I get the following error when starting CouchDB:

        $ bin/couchdb -n -a ~/Library/Application\ Support/us.physion.ovation/default.ini
        {"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/Users/hs/prj/build-couchdb/build/etc/couchdb/default.ini","/Users/hs/prj/build-couchdb/build/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,enoent}}},[{couch_server_sup,start_server,1,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,274}]}]}}}}}},[{couch,start,0,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

    Is there any way to provide a config file at the command line if that config file's path includes space(s)? Despite my best efforts in the mailing list archives, the wiki and Google, I haven't been able to find a solution or a definitive "it can't work". Any help greatly appreciated.
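    The workaround I'm testing in the meantime is to hide the space behind a symlink, which at least narrows the problem down to how the argument gets passed along; the paths below are my own layout:

        # give the config directory a space-free alias and point couchdb at that
        ln -s "$HOME/Library/Application Support/us.physion.ovation" "$HOME/.ovation-couch"
        bin/couchdb -n -a "$HOME/.ovation-couch/default.ini"
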

    Read the article

  • Running the LibreOffice MSI installer in English

    - by Scott Severance
    I'm trying to install LibreOffice on a machine running a Korean version of Windows XP. I don't know Korean. I haven't used Windows with any frequency in many years, so I'm pretty lost. When I run the installer, it shows up in Korean. But I want to customize the installation, so I need the installer to be in English.

    Googling took me to this page, where I found an example command to run the installer in Gaelic, which I modified for my system as follows:

        msiexec /i LibO_3.6.1_Win_x86_install_multi.msi TRANSFORMS=:1084

    This works, except that I know less about Gaelic than I do about Korean. The help page provided a link to a page where I could look up the language ID codes. From that page, I determined that the correct code was 1033 for US English and 2057 for UK English. When I substituted the code, I got an error message. Here's the message as translated by Google, followed by the original:

        Transform can not be applied. Verify that the specified transform paths are valid.
        ?? ??? ??? ? ????. ??? ?? ??? ???? ?????.

    I can't very well search on a machine translation, so I don't know where to go from here. What is the problem? How can I make the installer operate in English? Alternatively, how can I change XP to display its interface in English, while keeping full functionality for typing in Korean?
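    In case it helps diagnose the transform error, I'm also capturing a verbose installer log with the same command; the log path is arbitrary:

        msiexec /i LibO_3.6.1_Win_x86_install_multi.msi TRANSFORMS=:1033 /l*v C:\libo-install.log
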

    Read the article

  • HP c6180 driver refuses to install - "must reboot to continue delete files and continue install" loop

    - by Aszurom
    User called me last night; can't get the HP drivers to install for his printer. 500 MB download for the latest c6180 driver. The PC says "computer must be restarted to delete some files, then setup will continue." and it won't get past that point. (That's not the exact phrasing of the error, and I'm not on the system now.) The target machine is XP SP2. I upgraded it to SP3, fully patched. Ran Malwarebytes, Spybot and HijackThis, and turned off all startup entries. I noted that Windows Update also failed on optional printer driver updates for other printers installed on the machine. Went to print server properties, removed all printer drivers from the system, deleted all printer ports. Made sure no services were marked disabled. After 5 hours of fighting this I started to get desperate, and uninstalled anything that referenced HP on the machine at all. Still no cure. I'm 6 hours into this and completely stumped. Google returns nothing applicable except unanswered requests for help on the same topic.
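    One thing I still want to check is whether a stale pending file rename operation is what keeps setup demanding a reboot; roughly this, though treat it as a sketch rather than a confirmed fix:

        rem is something stuck waiting for a reboot to delete/rename files?
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations
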

    Read the article

  • saslauthd using too much memory

    - by Brian Armstrong
    Woke up today to see my site slow/unresponsive. Pulled up top, and it looks like a ton of saslauthd processes have spun up, using about 64m of RAM each and pushing the machine into swap. I've never seen this many of them on there.

        top - 16:54:13 up 85 days, 11:48,  1 user,  load average: 0.32, 0.50, 0.38
        Tasks: 143 total,   1 running, 142 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.7%us,  0.3%sy,  0.0%ni, 97.3%id,  0.2%wa,  0.0%hi,  0.0%si,  1.4%st
        Mem:   1048796k total,  1025904k used,    22892k free,    14032k buffers
        Swap:  2097144k total,   332460k used,  1764684k free,   194348k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
          848 admin    20   0  263m 115m 4840 S    0 11.3   5:02.91 ruby
          906 admin    20   0  265m 113m 4828 S    0 11.1   5:37.24 ruby
        30484 admin    20   0  248m  91m 4256 S    6  9.0 219:02.30 delayed_job
         4075 root     20   0  160m  65m  952 S    0  6.4   0:24.22 saslauthd
         4080 root     20   0  162m  64m  936 S    0  6.3   0:24.48 saslauthd
         4079 root     20   0  162m  64m  936 S    0  6.3   0:24.70 saslauthd
         4078 root     20   0  164m  63m  936 S    0  6.2   0:24.66 saslauthd
         4077 root     20   0  163m  62m  936 S    0  6.1   0:24.66 saslauthd
         3718 mysql    20   0  312m  52m 3588 S    1  5.1  3499:40 mysqld
          699 root     20   0 72744 7640 2164 S    0  0.7   0:00.50 ruby
        15701 postfix  20   0  106m 5712 4164 S    1  0.5   0:00.50 smtpd
        15702 postfix  20   0 52444 3252 2452 S    1  0.3   0:00.06 cleanup
         4062 postfix  20   0 41884 3104 1788 S    0  0.3 125:26.01 qmgr
        15683 root     20   0 51504 2780 2180 S    0  0.3   0:00.04 sshd
        14595 postfix  20   0 52308 2548 2304 S    1  0.2   0:24.60 proxymap
        15483 postfix  20   0 43380 2544 1992 S    0  0.2   0:00.38 smtp
        15486 postfix  20   0 43380 2544 1992 S    0  0.2   0:00.36 smtp
        15488 postfix  20   0 43380 2540 1992 S    0  0.2   0:00.38 smtp
        15485 postfix  20   0 43380 2532 1984 S    0  0.2   0:00.36 smtp
        15489 postfix  20   0 43380 2532 1984 S    0  0.2   0:00.40 smtp

    I wasn't sure what saslauthd is; Google says it handles plaintext authentication. The machine has been sending a lot of email through Postfix, so this could be related. Anyone know why so many may have spun up? Are they safe to kill? Thanks!
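    For now I'm inclined to kill the extra workers and cap the count while I figure out what spawned them; something like the following, though exactly where the -n flag gets set varies by distro (/etc/default/saslauthd or /etc/sysconfig/saslauthd), so this is just the idea:

        # how many saslauthd workers are running, and with what flags?
        ps aux | grep [s]aslauthd
        # temporarily stop the runaway daemons
        sudo pkill saslauthd
        # saslauthd's -n flag caps the number of worker processes (default is 5)
        sudo saslauthd -a pam -n 2
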

    Read the article

  • Can't ping a DNS zone on windows server 2008 R2

    - by Roberto Fernandes
    I've just configured a Windows Server 2008 R2 machine, but I've got a lot of problems with the DNS role. About the server configuration:

        name:       fdserver
        IP address: 192.168.0.10

    I have a DNS zone called "fd.local". This is my domain and it's working OK. I've created a zone called "fdserver", and inside this zone a record (A) with "*" as the host. Because this is a web server, I've configured Apache so that if you enter something like "site.fdserver" it will point you to the "site" folder. This is working OK, but ONLY inside the server.

    This server is a DNS server too, and it has 3 DNS entries: 192.168.0.10 (its own IP), 8.8.8.8 and 8.8.4.4 (Google public DNS). Now the problems start... Most of the computers on my network CAN join the domain without problems, but just CAN'T ping "something.fdserver". Now comes the strange thing... If I remove the two secondary entries (8.8.8.8 and 8.8.4.4), the machine obviously stops being able to reach websites (like microsoft.com), but now it CAN ping "something.fdserver"!

    I don't know if I explained this correctly, and my English is terrible, but inside the server everything works as it's supposed to. On the workstation machines, it works only if I remove the secondary DNS entries! If you need any details, just ask. Thanks!
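    To show what I mean, this is the test I run from a workstation ('something' is any host under the wildcard; 192.168.0.10 is the server above, where the zone lives):

        nslookup something.fdserver                # uses whatever DNS servers the workstation has configured
        nslookup something.fdserver 192.168.0.10   # forces the query at my own DNS server - this one answers
        ping something.fdserver
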

    Read the article

  • Godaddy domain and Bluehost web hosting

    - by Digital site
    I have a domain from GoDaddy and web hosting at Bluehost. I want to make this work, as some people say there is no need to transfer the domain from GoDaddy to Bluehost. I was trying to work this out by adding the Bluehost name servers (ns1.bluehost.com and ns2.bluehost.com) at GoDaddy. This works, but I'm not sure it is 100% OK yet.

    The reason I say that is that when I type my address into any browser as mydomain.com, it doesn't work; instead I get an error message stating that the server was not found or couldn't be connected to... However, when I write the domain name and include the www. prefix, it works fine.

    The other problem is that when I search in Google or Yahoo, the domain shows up as mydomain.com, which is not really good, because my clients think my site is down due to the error message, and most people don't know that they have to add www. to the domain to make it work. I just want to make at least the bare domain work: mydomain.com
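    This is roughly how I've been comparing the two names (mydomain.com stands in for my real domain); the bare name is the one that misbehaves for me:

        dig +short mydomain.com A       # bare domain - this is the one that fails in the browser
        dig +short www.mydomain.com A   # www record - this one resolves and loads fine
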

    Read the article

  • Installing Linux on an Asus p8z68-m PRO Motherboard

    - by Holland
    Here is a challenge: how is this done? I've tried disabling the ASM1061 controller in the Onboard Devices section, using Wubi, booting from USB (as I don't have a DVD drive yet), and even booting from RAID/IDE (with AHCI as the default) to do this. Still, no dice. Google turns up virtually nothing about Linux and this mobo, apart from people just saying "disable ASMedia" (which, I assume, is the ASM1061 controller, as that's all I see - apart from the USB 3.0, which I disabled already), and that hasn't really helped much. Thus, what is wrong here?

    Edit: My problem is that I cannot boot Linux via USB or via a simple Windows-based installer such as Wubi (for Ubuntu). I wind up getting error messages along the lines of "write cache failed", along with many other cryptic messages similar to the following:

        [ 1400.351374] sd 4:0:0:0: [sdb] Test WP failed, assume Write Enabled
        [ 1400.353433] sd 4:0:0:0: [sdb] Asking for cache data failed
        [ 1400.356601] sd 4:0:0:0: [sdb] Assuming drive cache: write through

    This seems to be common for Asus P8Z68-M Pro motherboards, with the only notable solution being to "disable ASMedia", which, as I said before, I'm guessing is the ASM1061 controller on the motherboard. Despite already disabling this, I have tried both Fedora and Ubuntu without any success. I need to know what I can do about this; has anyone run into something similar or heard about this issue before? I know these motherboards are relatively new...

    Read the article

  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log and lately I've been noticing these accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example:

        69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-"

    Oddly, all 26 attempts are to either /cache/df.php, or to /cvfull.pdf/cache/df.php - they come in pairs. A few weeks ago it was zx.php, now it's df.php - I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting? I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?

    Read the article

  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can SSH into my remote server (a Google Compute Engine instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to the remote instance, and without adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my Git pushes and pulls. However, the setup broke when I reinstalled the OS on my local machine. Now when I try to connect to the GitHub server from my remote instance, I get the following:

    Cannot clone:

        [lucas@ecoinstance]~/node$ git clone git@github.com:lucasExample/test.git test
        Cloning into 'test'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Cannot push:

        [lucas@ecoinstance]~/node/nodetest1$ git status
        # On branch master
        # Your branch is ahead of 'origin/master' by 1 commit.
        #
        # nothing to commit (working directory clean)
        [lucas@ecoinstance]~/node/nodetest1$ git push
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Additional info:

        [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l
        Could not open a connection to your authentication agent.
        [lucas@ecoinstance]~/.ssh$ ls
        authorized_keys  known_hosts

    As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it would push and pull just fine until I reinstalled my local OS. I can still clone, push, and pull on my local machine; it is just my remote machine that can no longer authenticate. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy. Any suggestions would be great. I am not sure how to search for this concept of authenticating from a remote instance via my local machine, so any references are appreciated as well.
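    My best guess is that SSH agent forwarding is what used to make this work, so this is what I'm trying to restore; the key file name and the host alias are stand-ins for my actual ones:

        # on the local machine: load the key and connect with agent forwarding enabled
        ssh-add ~/.ssh/id_rsa
        ssh -A lucas@ecoinstance
        # then on the remote instance: the forwarded key should be visible and GitHub should accept it
        ssh-add -l
        ssh -T git@github.com
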

    Read the article

  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protect flag in the information dialog works just fine. However, I have many more such files, and I thus want to do it from Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' showed me that the permissions are already fine ("-rwxrwxrwx 1 az az" where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entries. Then I thought this might be some ACL setting (whereby I thought they would be stored as xattrs, but let's try it anyway) - and some Google searching returned something about 'chmod -a' or 'chmod -i' or so. All these tries only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-protect flag in Finder solves that.
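    One more thing I plan to try, on the assumption that Finder's lock maps to the BSD 'user immutable' flag rather than to the mode bits or ACLs; the file and volume names below are placeholders:

        ls -lO stubborn-file.jpg          # capital O also prints BSD flags - look for 'uchg'
        chflags nouchg stubborn-file.jpg  # clear the user immutable flag on one file
        # and for a whole tree of such files on the FAT32 volume:
        find /Volumes/MYFATVOLUME -flags +uchg -exec chflags nouchg {} +
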

    Read the article
