Search Results

Search found 5711 results on 229 pages for 'jim m somewhere'.


  • Windows XP Recovery Console without ntfs.sys? (0x00000024 BSOD)

    - by Kalle
    I have two physical disks in a computer; for simplicity let's call them C and D. C: has Windows XP and D: holds some data. The problem is that whenever D: is connected I can't boot Windows: I get a BSOD, 0x00000024 / NTFS_FILE_SYSTEM. The same thing happens if I boot Windows with D: disconnected and then connect it once Windows has loaded. The KB article about this problem says I have to run chkdsk, but I can't get anywhere I could run it from, because I get a BSOD whenever the disk is connected! Even the Recovery Console BSODs if D: is connected. The final option in the KB is to boot the computer from the Windows 2000 Setup disks and edit a file to manually disable the ntfs.sys driver, then run chkdsk. The problem is that I don't have a floppy drive. Is there any way to boot the built-in Recovery Console with ntfs.sys disabled, or to burn the floppy version to a CD after extracting and modifying it on the hard drive? Right now the Windows XP bootable floppy creator(2) is asking me which floppy drive to extract to, which I can't answer because I have none :/ Other solutions to the root problem are also appreciated :)

    (2) http://www.microsoft.com/downloads/details.aspx?FamilyID=55820edb-5039-4955-bcb7-4fed408ea73f&displaylang=en


  • Postfix not working

    - by user1488723
    A while ago I installed the Postfix mail server on my Ubuntu 10.04 VPS. At the time it was working well, but now it has just stopped working. I was trying to enable SASL authentication and somewhere it must have gone really wrong. I've studied the Postfix main.cf and done everything in an orderly fashion to make sure nothing is wrong. I also have Dovecot installed and have configured dovecot.conf to run with Postfix.

    If I try telnet localhost 25 while logged in on the server I just get: Connection closed by foreign host. If I try telnet mail.example.com 25 "from the outside" I get: telnet: Unable to connect to remote host: No route to host

    And when I check the server log after the failed attempts I see this:

        Jun 28 15:49:31 msv postfix/smtpd[11839]: initializing the server-side TLS engine
        Jun 28 15:49:31 msv postfix/smtpd[11839]: connect from localhost.localdomain[127.0.0.1]
        Jun 28 15:49:31 msv postfix/smtpd[11839]: warning: SASL: Connect to /var/spool/postfix/private/auth failed: Connection refused
        Jun 28 15:49:31 msv postfix/smtpd[11839]: fatal: no SASL authentication mechanisms
        Jun 28 15:49:32 msv postfix/master[11598]: warning: process /usr/lib/postfix/smtpd pid 11839 exit status 1
        Jun 28 15:49:32 msv postfix/master[11598]: warning: /usr/lib/postfix/smtpd: bad command startup -- throttling

    The main.cf file looks like this:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        delay_warning_time = 4h
        myhostname = mail.example.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydomain = example.com
        myorigin = $mydomain
        mydestination = $mydomain
        relayhost =
        mynetworks = 127.0.0.1
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_use_tls = yes
        smtpd_tls_loglevel = 2
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_sasl_auth_enable = yes
        smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks
        smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination
        broken_sasl_auth_clients = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = /var/spool/postfix/private/auth
        smtpd_sasl_security_options = noanonymous

    The dovecot.conf file looks like this:

        protocols = imap imaps
        disable_plaintext_auth = no
        log_timestamp = "%b %d %H:%M:%S "
        ssl = yes
        ssl_cert_file = /etc/postfix/ssl/smtpd.crt
        ssl_key_file = /etc/postfix/ssl/smtpd.key
        mail_location = maildir:~/mail
        mail_access_groups = mail
        auth_username_chars = abcdefghijklmnopqrstuvwxyz
        protocol imap {
          imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
        }
        auth default {
          mechanisms = plain login
          passdb pam {
          }
          userdb passwd {
          }
          socket listen {
            client {
              path = /var/spool/postfix/private/auth
              user = postfix
              group = postfix
              mode = 0660
            }
          }
        }
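
    A rough first check, suggested by the "Connection refused" line in the log above, is whether Dovecot is actually creating the auth socket Postfix is pointed at; a sketch of that diagnosis (service and log names are the Ubuntu 10.04 defaults and may differ) might be:

        # restart dovecot and confirm the socket it should create for postfix exists
        sudo /etc/init.d/dovecot restart
        ls -l /var/spool/postfix/private/auth

        # confirm postfix is really using the SASL settings shown above
        postconf smtpd_sasl_type smtpd_sasl_path smtpd_sasl_auth_enable

        # then retry the telnet test while watching the log
        sudo tail -f /var/log/mail.log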


  • Volume licensed copy of MS Office 2007 shows "Non Commercial Use" in title bar

    - by Linker3000
    I have just removed the demo copy of Office 2007 preinstalled on a new laptop and replaced it with an install of the full Professional edition downloaded from the MS Volume Licensing site, and entered one of our volume licence keys, yet the apps (Word etc.) show "Non Commercial Use" in the title bar, which is what usually happens in the Home and Student edition. I have tried:

    - Deleting the Office registration keys in the registry and using one of our other Office 2007 volume licence keys (we have 7) when prompted to re-register
    - Uninstalling Office completely and reinstalling it from a newly-downloaded ISO burned to CD, and also from a compressed file that installs from hard disk/USB stick (both from Microsoft - no dodgy stuff)

    Yet the non-commercial message persists. Although it's a cosmetic issue, the laptop is going to be used for customer presentations, so the salesperson is rightly concerned about the image this portrays. I presume there may be something floating around the registry or in a file somewhere, but I can't find it. Articles I have found elsewhere just refer to the message being related to the use of a Home and Student licence key, which is 100% not the case. Any thoughts? Thanks.


  • Postfix able to receive email but not able to send it

    - by c0mrade
    I had Postfix running on my machine (it comes with CentOS minimal), but today I configured it to use my domain; for the sake of example the domain name is example.com. Here is my config:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        header_checks = regexp:/etc/postfix/header_checks
        html_directory = no
        inet_interfaces = all
        inet_protocols = ipv4
        mail_owner = postfix
        mailbox_size_limit = 1073741824
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        message_size_limit = 10485760
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = example.com
        myhostname = mail.example.com
        mynetworks = 127.0.0.0/8
        mynetworks_style = host
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relayhost = smtp.$mydomain
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_banner = $myhostname ESMTP $mail_name
        smtpd_client_restrictions = permit_mynetworks,reject_unknown_client,permit
        smtpd_recipient_restrictions = permit_mynetworks,permit_auth_destination,permit_sasl_authenticated,reject
        unknown_local_recipient_reject_code = 550

    I need one email account to be able to send emails (password retrievals etc.). I read today somewhere that if you create a Unix account, Postfix will recognize it as an email address, so if your account username was ant your email would be [email protected]. So I tested that and tried to send email to [email protected], and I successfully received mail. When I try to send the email with an Ant task script, I'm not able to connect:

        Failed messages: javax.mail.MessagingException: Could not connect to SMTP host: mail.example.com, port: 25;
          nested exception is: java.net.ConnectException: Connection timed out: connect

    What am I missing here?

    Edit: I'm able to telnet to localhost:

        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 mail.example.com ESMTP Postfix
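
    Since telnet to localhost works but connections from outside time out, the usual suspects are a firewall in front of port 25 or Postfix only bound to loopback. A rough set of checks (the rules shown are only an illustration; adjust them to the local policy) might be:

        # is postfix listening on 0.0.0.0:25 rather than 127.0.0.1:25?
        netstat -tlnp | grep ':25'

        # is iptables filtering inbound SMTP?
        iptables -L INPUT -n -v

        # if it is, something like this opens it up for testing
        iptables -I INPUT -p tcp --dport 25 -j ACCEPT
        service iptables save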


  • Install IIS on Server 2003 unattended via PowerShell as a service user (no terminal session)

    - by maik
    I've been racking my brain with this for a bit and figured I would ask here to see if anyone could enlighten me. As the title says, I'm trying to install the IIS role on Server 2003 using an unattended install method launched via a service. We're using RightScale and most of what we want to accomplish is pretty straightforward. I created an unattend file for use with sysocmgr.exe:

        [Components]
        iis_common = ON
        iis_www = ON
        iis_www_vdir_scripts = ON
        iis_inetmgr = ON
        fp_extensions = ON
        iis_ftp = ON

    And I invoke it like so:

        sysocmgr.exe /i:%windir%\inf\sysoc.inf /u:C:\path\to\iis-unattend.txt /r /x /q

    If I run that from a command prompt while logged in as Administrator it works just fine, but if it runs via RightScript (the RightScale user on the server, which is a local admin) it fails somewhere in the middle, and the logs I get are rather unhelpful. The thing is, I can do this same thing with the SNMP client (which is a Windows component, not a server role) and it works with no problems when run via the script service user. My best guess is that sysocmgr.exe is expecting a GUI element to be there during the role installation, and since the service user has no terminal session it coughs and dies. That's just a wild stab in the dark.


  • How do I configure custom URL handlers on OS X?

    - by cwd
    I've been reading a lot online about custom URL handlers / custom protocol handlers, such as:

    - Launching External Applications using Custom Protocols under OSX
    - OS X URL handler to open links to local files

    I get that you can tell the system that a particular program is able to handle a certain scheme/protocol with the Info.plist file:

        <key>CFBundleURLTypes</key>
        <array>
            <dict>
                <key>CFBundleURLName</key>
                <string>Local File</string>
                <key>CFBundleURLSchemes</key>
                <array>
                    <string>local</string>
                </array>
            </dict>
        </array>
        <key>NSUIElement</key>
        <true/>

    But if there are multiple applications capable of opening the same URL handler, such as mailto:, how do you specify which one you want the system to use? There were some references to utilities like the More Internet preference pane, which no longer seems to be available from the author's site. I did find it online by Googling, but it seems a bit shaky - like it was written for an older OS X, perhaps Tiger. I haven't been able to find information on how to set the URL handler for protocols and custom protocols. I'm assuming there is a plist file somewhere that I can edit - or maybe there is a newer, better utility that works well with Mountain Lion?
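
    For choosing which of several capable applications actually handles a scheme, one option is the third-party duti command-line tool, which talks to the same LaunchServices handler database that panes like More Internet did. This is only a sketch under the assumption that duti is installed separately (e.g. via MacPorts or Homebrew) and that the bundle identifiers below are the ones you want:

        # make Mail the mailto: handler
        duti -s com.apple.mail mailto
        # or hand a custom scheme to a specific app
        duti -s com.example.myapp local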


  • How Do I Install Latest Java Plugin on Ubuntu 8.04 LTS with Custom Firefox 3.6.3?

    - by Volomike
    I have Ubuntu 8.04 LTS and will need to remain on it for a while, as I am a programmer and cannot disturb what I'm doing in the middle of a project. As you know, it doesn't want to install the latest Firefox by default and keeps me in the stone age, so I had to install my own Firefox. I did so, and with relative ease I was able to install the Flash plugin using the normal "ln -s" technique one usually does with /usr/lib/firefox/plugins. Now I need to do a GoToMyPC task and it requires Java -- moan -- so I need to figure out how to install Java. I downloaded and installed the latest Java plugin in /usr/java, and underneath that layer of folders I do see a plugins folder with a .so file. So, I went to /usr/lib/firefox/plugins and did this:

        ln -s /usr/java/jre1.6.0_20/plugin/i386/ns7/libjavaplugin_oji.so

    Then I also read on the web that one has to create a ~/.mozilla/plugins folder, cd into it, and run the same ln -s command again. Another site recommended finding one's ~/.mozilla/firefox and renaming it to ~/.mozilla/firefox.old (after backing up your bookmarks) and then launching Firefox again so that it creates ~/.mozilla/firefox and uses the new Java plugin. Well, all these attempts have completely failed and it is incredibly frustrating. I open about:plugins to see my plugins and all I get are the Flash and default null plugins; I do not see the Java one. Also, they said that with Firefox 3.6.3 I would see a Java Console entry on the Tools menu, and I don't see that either. I found a pluginreg.dat somewhere deep under ~/.mozilla/firefox, but it does not list the Java plugin inside -- only Flash and the null plugin. Please help me install Java. I need to help a client out and need to connect to his PC remotely with GoToMyPC, which requires Java inside Firefox.
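
    One likely culprit: Firefox 3.6 dropped the old OJI plugin interface, so libjavaplugin_oji.so will never register there. The NPAPI plugin that ships inside JRE 6 update 10 and later is libnpjp2.so instead. A sketch of the symlink, using the JRE path from the question (double-check that the lib/i386 directory actually exists on your install):

        cd ~/.mozilla/plugins          # or /usr/lib/firefox/plugins
        rm -f libjavaplugin_oji.so
        ln -s /usr/java/jre1.6.0_20/lib/i386/libnpjp2.so .
        # restart Firefox and recheck about:plugins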


  • Cacti is ignoring hash marks in interface aliases

    - by Matt Simmons
    I'm attempting to set up Cacti to monitor a router's interfaces, and I'm having trouble getting the graph templates to show the information that I'd like. Our interface configuration looks like this:

        interface GigabitEthernet3/6
         description WalljackNumber # Server info
         no ip address
         no shutdown
         switchport
         switchport access vlan 116
         switchport mode access
         ip dhcp snooping trust
         spanning-tree portfast

    The "Server info" string is really just the machine name and a short relevant description, such as "PolarSprings vmnic2". The important part appears to be that it follows the hash mark. When I run snmpwalk, I get the proper output:

        IF-MIB::ifAlias.230 = STRING: WalljackNumber # Server info

    But in Cacti, when I go into the graph templates and set the title to this:

        |host_description| - Traffic - |query_ifName| (|query_ifAlias|)

    all that shows up in the graph is:

        switchname - Traffic - Gi3/6 (WalljackNumber #)

    which strikes me as a little weird. What I suppose MAY be happening is that somewhere in the Cacti stream it's interpreting # as a comment and stripping everything after it, but I'm not sure. I was hoping someone could tell me that this is a known, documented behavior, or that I can change it in a setting I wasn't aware of. The alternative answer is to change the delimiter from # to something else, but I've got over a thousand lit switchports on an old college infrastructure, and I'm not sure what else might be relying on them.


  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:

        package { "rubygems":
          ensure   => installed,
          provider => "gem";
        }

        package { "cloudservers":
          ensure   => installed,
          provider => "gem",
          require  => Package["rubygems"];
        }

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          $parts   = split($name, ',')
          $address = $parts[0]
          $ip      = $parts[1]
          $aliases = $parts[2]
          host { $address:
            ip           => $ip,
            host_aliases => $aliases
          }
        }

    Is there a way to stop the function being run so early, or at least to have its run depend upon the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?


  • How to deal with the extremely big *.ost files in a Terminal Server environment which is running out of space

    - by Wolfgang Kuehne
    Our Terminal Server is running out of hard disk space, and the files which occupy most of the space are the *.ost files of Outlook, which come from the users who work on the Terminal Server all the time through Remote Desktop. Outlook is installed on the Terminal Server and various users can use it. What would be a solution in this case? Is there a way to limit the size of the *.ost files?

    I read in forums that having Outlook 2010 set up in Cached Exchange Mode isn't best practice for an environment where HDD space is a major constraint. The first thing that came to my mind was folder redirection, placing the OST files (together with the AppData folder) on a network share, but this does not help, because the OST files are saved in a part of the AppData folder which cannot be redirected. Then I wondered whether it is possible to limit the size of the OST file, or to limit how long it keeps email cached - say, just emails from the last 6 months would be sufficient.

    Another solution that came to my mind was moving the OST files somewhere else; this requires the old OST file to be removed and a new one created. I am not quite sure whether the new OST file will still have cached the emails which were available in the old one, or whether it will start caching from where the other one left off. What do you suggest?
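
    If Cached Exchange Mode has to stay on, one partial measure is the Outlook policy that caps how large a PST/OST may grow (registry values documented under the Outlook 2010 policies key; the sizes below are in MB and purely an example - note it limits further growth rather than trimming what is already cached):

        reg add "HKCU\Software\Policies\Microsoft\Office\14.0\Outlook\PST" /v MaxLargeFileSize /t REG_DWORD /d 10240 /f
        reg add "HKCU\Software\Policies\Microsoft\Office\14.0\Outlook\PST" /v WarnLargeFileSize /t REG_DWORD /d 9216 /f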


  • Error upgrading Ubuntu server from Intrepid to Jaunty

    - by Martin
    I'm trying to upgrade an old Ubuntu server from 8.10 (Intrepid) to 9.04 (Jaunty), but it fails:

        root@server1:/# do-release-upgrade
        Checking for a new ubuntu release
        Failed Upgrade tool signature
        Failed Upgrade tool
        Done downloading
        extracting 'jaunty.tar.gz'
        Failed to extract
        Extracting the upgrade failed. There may be a problem with the network or with the server.

    Does anyone have an idea why I get this error and how to fix it?

    UPDATE: I think I might have tracked the problem down. My /etc/update-manager/meta-release looks like this:

        [METARELEASE]
        URI = http://changelogs.ubuntu.com/meta-release
        URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
        URI_UNSTABLE_POSTFIX = -development
        URI_PROPOSED_POSTFIX = -proposed

    If I go to http://changelogs.ubuntu.com/meta-release, it has this info for Jaunty:

        Dist: jaunty
        Name: Jaunty Jackalope
        Version: 9.04
        Date: Thu, 23 Apr 2009 12:00:00 UTC
        Supported: 0
        Description: This is the 9.04 release
        Release-File: http://archive.ubuntu.com/ubuntu/dists/jaunty/Release
        ReleaseNotes: http://changelogs.ubuntu.com/EOLReleaseAnnouncement
        UpgradeTool: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz
        UpgradeToolSignature: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz.gpg

    Those links starting with archive.ubuntu.com are broken, since Jaunty is EOL. I guess I could fix this by copying this file, replacing "archive" with "old-releases", hosting the modified file somewhere, and changing the URL in the meta-release file. Is this a good solution, or will it make me run into worse problems?
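
    That plan is essentially the standard workaround for upgrading an EOL release. A rough sketch of it, serving the edited file from the box itself (the port and the throwaway web server are just an illustration), might be:

        cd /tmp
        wget http://changelogs.ubuntu.com/meta-release
        sed -i 's|archive.ubuntu.com|old-releases.ubuntu.com|g' meta-release
        python -m SimpleHTTPServer 8000 &
        # point URI in /etc/update-manager/meta-release at http://localhost:8000/meta-release,
        # switch /etc/apt/sources.list to old-releases.ubuntu.com as well, then:
        sudo do-release-upgrade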


  • Apache + PHP via FastCGI

    - by Wilco
    I'm running into some problems while attempting to run PHP via FastCGI in Apache. I have the FastCGI module loaded, but get the following error when attempting to load a page:

        The requested URL /fastcgi/php54.fcgi/index.php was not found on this server.

    Somewhere, it seems, the script to be executed is being appended to the executable without any spaces. Is this where the problem likely lies? Below I've included snippets from my Apache configuration (hopefully this is enough):

        LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so
        FastCgiIpcDir /var/run/fastcgi
        AddHandler fastcgi-script .fcgi
        FastCgiConfig -autoUpdate -singleThreshold 100 -killInterval 300
        AddType application/x-httpd-php .php
        ScriptAlias /fastcgi/ /Library/WebServer/FCGI-Executables/

        <Directory "/Library/WebServer/FCGI-Executables">
            Options +ExecCGI
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all
        </Directory>

        <VirtualHost *:80>
            ServerName www.somedomain.com
            ServerAdmin [email protected]
            DocumentRoot "/Web/www.somedomain.com"
            DirectoryIndex index.html index.php default.html
            CustomLog /var/log/apache2/access_log combinedvhost
            ErrorLog /var/log/apache2/error_log
            Action application/x-httpd-php /fastcgi/php54.fcgi
            <IfModule mod_ssl.c>
                SSLEngine Off
                SSLCipherSuite "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM"
                SSLProtocol -ALL +SSLv3 +TLSv1
                SSLProxyEngine On
                SSLProxyProtocol -ALL +SSLv3 +TLSv1
            </IfModule>
            <Directory "/Web/www.somedomain.com">
                Options All -Indexes +ExecCGI +Includes +MultiViews
                AllowOverride All
                <IfModule mod_dav.c>
                    DAV Off
                </IfModule>
                <IfDefine !WEBSERVICE_ON>
                    Deny from all
                    ErrorDocument 403 /customerror/websitesoff403.html
                </IfDefine>
            </Directory>
        </VirtualHost>

    ... and this is the executable:

        #!/bin/sh
        PHP_FCGI_CHILDREN=1
        PHP_FCGI_MAX_REQUESTS=5000
        export PHP_FCGI_CHILDREN
        export PHP_FCGI_MAX_REQUESTS
        exec /opt/local/bin/php-cgi54


  • Why do most songs in my media collection play twice? - Corrupt media?

    - by Dean
    Problem: Whether I'm playing the media with Rhythmbox on Ubuntu, Winamp on Windows, or my Nokia N95's media player, most of my audio files (OK, maybe only 40%) play twice.

    Info: I have a 500GB external 2.5" WD HDD with a 150GB primary FAT32 partition labeled MUSIC. Inside this I have about 500 folders containing about 10,000 MP3/WMA/M4A/WAV files. I manage the drive using Ubuntu 9.10 and frequently copy data to/from it using rsync or, on Windows, TotalCopy. The visual output is different in each media player, but each behaves as if the one MP3 has the same song on it twice, and as soon as it ends it begins again. Winamp shows the song as twice as long as it should be; the N95's media player shows the progress bar off the right-hand side of the screen when it begins playing (then jumps back to the left, then continues along...); Rhythmbox doesn't show me how long the song is, nor does the progress bar move along the screen.

    Plea: It seems to me that somewhere along the line my collection has become corrupt... but where? And how? And please someone tell me I can fix it!! TIA, Dean.
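
    As a quick sanity check on whether the files themselves contain the audio stream twice (rather than the players misbehaving), comparing the decoded duration against the expected song length is one option; ffprobe/ffmpeg are assumed to be installed, and the filename and length below are placeholders:

        ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 song.mp3
        # if that prints roughly double the real length, the stream is duplicated; copying out
        # just the first half (length in seconds) repairs the file without re-encoding:
        ffmpeg -i song.mp3 -t 240 -acodec copy song-fixed.mp3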


  • Name one good reason for immediately failing on an SMTP 4xx code

    - by Avery Payne
    I'm really curious about this. The question (highlighted in bold): Can someone name ONE GOOD REASON to have their email server permanently set up to auto-fail/immediately fail on 4xx codes? Because frankly, it sounds like "their" setups are broken out of the box. SMTP is not instant messaging. Stop treating it like IRC or Jabber or MSN or insert-IM-technology-here. I don't know what possesses people to have the "IMMEDIATE DELIVERY OR FAIL" mentality with SMTP setups, but they need to stop doing that. It just plain breaks things. Every two or three years I stumble into this. Someone, somewhere, has decided in their infinite wisdom that 4xx codes are immediate failures, and suddenly it's OMGWTFBBQ THE INTARNETZ ARE BORKEN, HALP SKY IS FALLING instead of "oh, it'll re-attempt delivery in about 30 minutes". It amazes me how it suddenly becomes "my" problem that a message won't go through because someone else misconfigured "their" SMTP service. IF there is a legitimate reason for having your server permanently set up in this manner, then the first good answer will get the check. IF there is no good reason (and I suspect there isn't), then the first good-sounding-if-still-logically-flawed answer will get the check.


  • Apache is spawning more and more processes!!

    - by erotsppa
    We have a LAMP setup that has been working pretty well for half a year. All of a sudden, today the Apache server (the MySQL servers are not on this box) started to die. It seems to spawn more and more processes over time; eventually it consumes all the memory and the server just dies. We are using prefork. In the meantime, what we have done is add more RAM and increase the MaxClients and ServerLimit parameters to 512. We're just prolonging the crash - the number still goes up slowly, and maybe in a day it will reach that limit. What is going on? We only have around 15-20 requests per second. We have 1 GB of memory and it's not even half used, and there's no swapping going on. Why is Apache creating more and more processes? It's almost like there's a leak somewhere! The database boxes are fine; they are not causing a delay to requests. We tested some queries and everything is quick!
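
    Before raising the limits again, it may be worth watching how many children exist and how big each one is, to see whether this is slow growth in child count or a per-child memory leak; a rough set of commands (the process name is assumed to be apache2 on Debian/Ubuntu, httpd elsewhere):

        # count of running children, refreshed every 5 seconds
        watch -n 5 'ps -C apache2 --no-headers | wc -l'
        # per-child memory, largest first
        ps -C apache2 -o pid,rss,vsz,etime,args --sort=-rss | head
        # what the busy children are serving (needs mod_status with ExtendedStatus On)
        apache2ctl status

    If individual children keep growing, lowering MaxRequestsPerChild so that long-lived children get recycled is a common stopgap while the leaking code path is tracked down.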


  • What is the fastest way to copy all data to a new larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious. I have a full 320GB disk inside my machine, a new 1TB disk to replace it, and a USB 2.0 chassis. It is only data on a single partition, no OS/apps involved, and the old drive will be kept somewhere as a backup (no secure wiping etc.).

    The simple option would be to put the new disk in the USB chassis, copy the files, then swap the drives. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?), then it would be significantly faster to swap the drives first and read from the old drive over USB, right? The other consideration is that copying lots of files is usually slower than copying a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-to-SATA copying is faster than involving USB, knowing what to do when that isn't an option is useful information.)

    Summary:
    - Does a USB SATA chassis suffer from a read/write inequality (like a USB pen drive does, but unlike a direct SATA connection)?
    - Can Windows 7 do sequential access? (I can't find confirmation that Robocopy does this.)
    - Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?
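
    On the Robocopy question specifically: by default it copies one file at a time (multi-threaded copying is opt-in via /MT), so a single elevated pass along these lines is a reasonable baseline for a data-only move; the drive letters are placeholders for the old and new data partitions:

        robocopy E:\ F:\ /E /COPYALL /DCOPY:T /R:1 /W:1 /LOG:C:\copy.log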


  • nginx error page and internal directives not working as expected

    - by Romain
    I'd like to setup my nginx server to return a specific error page on HTTP 50x status codes, and I'd like this page to be unavailable by a direct request from users (e.g., http//mysite/internalerror). For that, I'm using nginx's internal directive, but I must be missing something, as when I put that directive on my /internalerror location, nginx returns a custom 404 error (which isn't even my own 404 error page) when a page crashes. So, to summarize, here's what seems to happen: GET /Home nginx passes the query to Python I'm simulating an application bug to get the 502 error code nginx tries to return /InternalError from its error_page rule because of the internal rule, it finally fails back to a custom 404 error code <-- why? the documentation says error_page directives are not concerned by internal: http://wiki.nginx.org/HttpCoreModule#internal Here's an extract from nginx.conf with a few comments to point things out: error_page 404 /NotFound; error_page 500 502 503 504 =500 /InternalError; # HTTP 500 Error page declaration location / { try_files /Maintenance.html $uri @pythonbackend; } location @pythonbackend { include uwsgi_params; uwsgi_pass unix:///tmp/uwsgi.sock; } location ~* \.(py|pyc)$ { # This internal location works OK and returns my own 404 error page internal; } location /__Maintenance.html { # This one also works fine internal; } location ~* /internalerror { # This one doesn't work and returns nginx's 404 error page when I trigger an error somewhere on my site internal; } Thanks very much for your help!!


  • How do I install the latest version of packages in Ubuntu?

    - by Roman
    For example, I want to install the latest version of numpy. I type the following: sudo apt-get install python-numpy. The first time I type this it installs something, and if I type it a second time it tells me I already have the latest version of numpy. However, I see that my version of numpy is 1.1.1, and I know that is NOT the latest version. Why does this happen, and how can the problem be solved? I can find the *.tar.gz file with the latest version, I can extract the files from the archive, and then I need to run one of the scripts somewhere among the extracted files. But I do not like this approach - it is too complicated. I do not know where I should put all these files, I do not know which dependencies I should install before I run the installation script, I do not know where numpy will be put after installation, and so on. Is there an easy way to get the latest version of numpy?
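
    One way around the frozen package, without touching the system python-numpy, is to build the current release from PyPI; a sketch for 8.04 (this assumes setuptools from the repositories is acceptable and that a compiler toolchain is needed for the build):

        sudo apt-get install python-setuptools python-dev build-essential
        sudo easy_install --upgrade numpy
        python -c 'import numpy; print numpy.__version__'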


  • MSE updating fails, no warning or error message.

    - by WebDevHobo
    I'm running Windows 7 Ultimate, 32-bit. For the last couple of days, MSE has failed to update, remaining stuck at version 1.75.119. I presume an error log or event log is created somewhere, but I don't know where to find those. It just says "connection failed". I've tried it at home, at work and at friends' places, but it never works. I've restarted the computer a lot of times now and checked for Microsoft Updates in general, but nothing shows up.

    EDIT: I've opened a bounty for this, because I really don't know what to do anymore. The oldest answer here (the long post) did not work. Besides this problem, I'm having trouble using MSI installers too. I've had to add the SYSTEM group to a lot of folders and give it full control, but shouldn't SYSTEM already be there? Also, I had to remove the "read-only" attribute from the ProgramData and Users folders, add the SYSTEM group there too and give it full control. Only then will the MSI install work, and even then it says I don't have the rights to create a shortcut on the desktop. I don't know what I need to modify, and where, for that. I'm mentioning this because I don't know how MSE updates, but if it uses MSI files to do that, it might explain things. The SYSTEM group remains added, but every time I take away the read-only attribute, click OK and check the settings again, read-only is still active... That's all I know. (A screenshot showed that all those updates were manual.)


  • Painfully slow login to AD bound Mac OS X Leopard machine when off home network

    - by GeeBee
    Dear all, just looking for a little help with this problem that seems to trip a lot of people up and is causing me no end of grief. I have a number of fully patched OS X Leopard machines that are bound to my AD (Server 2003). When on the home network, logging in is swift and works as expected. When users take the machines off site, login can take 5 minutes or more: the user enters correct credentials, but the desktop does not appear for a very long time. Outside the office, I have tried logging in with a local admin account, switching off AirPort, and then logging in with an AD account; in this situation login is immediate again. It all seems as if Leopard is finding a suitable wireless network and spending far too long looking for the domain before eventually giving up and using the cached credentials instead. I have read that disabling Bonjour on the machine will stop this problem (I have not yet tested it): http://www.macwindows.com/leopardAD.html#111607z ...but I am reluctant to use this "solution", as I would like to be able to use Bonjour on the local network as well as having AD-bound machines. However, is disabling Bonjour really the only answer? Is there not some time-out setting somewhere that could be amended to stop Leopard spending forever looking for home? Any help would be very gratefully received. Thanks, Gordon


  • CD-ROM does not appear on desktop, Mac OS X 10.5.7

    - by Cheeso
    When I pop a CD-ROM into the drive of my MacBook Pro, it spins up - I can hear it - but no icon appears on the desktop. (I think it's 10.5.7; I'm actually not sure how to verify this on a Mac, but I think I saw 10.5.7 flash by somewhere.) In the Finder preferences I have "Show these items on the Desktop" set to show hard disks, external disks, and CDs, DVDs and iPods; all three of those are checked, and I do see the internal HD on the desktop. In Disk Utility I can see the CD/DVD hardware - it says "MATSHITA DVD-R UJ-857E..." - and from Disk Utility I can eject the drive. But in Finder there is never a CD/DVD listed under "Devices". When I insert a disc, nothing happens; I cannot see it. I also cannot boot from bootable CD-ROMs by holding C down. Suggestions? I am not very experienced with Macs; I have used Windows for years.

    EDIT - two updates: I saw an article on support.apple.com and modified hostconfig appropriately. It did not have the AUTODISKMOUNT entry, so I added one and rebooted. Same behavior: Finder does not see the CD-ROM and does not mount it on the desktop. I then put an old manufactured CD-ROM into the drive and voila! it showed up on the desktop. The CD that does not appear is a GParted (GNOME Partition Editor) live CD, which I guess is based on Debian. That CD boots in other (non-Mac) PCs. I want to use it to adjust the Boot Camp partition. Suggestions?


  • Transferring 'Live' Documents to Another Computer

    - by waiwai933
    I was wondering if there is any OS/application that has some support for transferring a document to another computer without having to save, transfer and then reopen it. Basically, is there a way that, if I'm working on my desktop, I can click a button (or something similar) and have the exact state of that computer/application transferred to another? For example, if I'm writing a document, is there a way to get it to computer B without saving it, putting the file on my flash drive, and having to reopen it?

    Edit: I just realized that this is possible through the wonderful phenomenon known as cloud computing, but this is not the type of solution I'm looking for.

    Edit 2: I wanted to clarify: by 'save', I meant that I didn't want to have to save it to a special location, be that a (flash) drive or an upload to the web. Saving to the local hard drive is fine (and probably necessary, since technologies such as Bluetooth require the file to be saved somewhere). This is a bit inspired by a scene in Avatar, so I highly doubt that it actually exists... but if it does, I don't want to miss out.


  • CPU-adaptive compression

    - by liori
    Hello. Let's assume I need to send some data from one computer to another over a pretty fast network... for example a standard 100Mbit connection (~10MB/s). My disk drives are standard HDDs, so their speed is somewhere between 30MB/s and 100MB/s. So I guess that compressing the data on the fly could help.

    But... I don't want to be limited by CPU. If I choose an algorithm that is intensive on CPU, the transfer will actually go slower than without compression. This is difficult with compressors like GZIP and BZIP2 because you usually set the compression strength once for the whole transfer, and my data streams are sometimes easy, sometimes hard to compress - this makes the process suboptimal because sometimes I do not use the full CPU, and sometimes the bandwidth is underutilized.

    Is there a compression program that would adapt to the current CPU/bandwidth and hit the sweet spot so that the transfer is optimal? Ideally for Linux, but I am still curious about all solutions. I'd love to see something compatible with GZIP/BZIP2 decompressors, but this is not necessary. So I'd like to optimize total transfer time, not simply the number of bytes to send. Also, I don't need real-time decompression... real-time compression is enough. The destination host can process the data later in its spare time. I know this doesn't change much (compression is usually much more CPU-intensive than decompression), but if there's a solution that could use this fact, all the better. Each time I am transferring different data, and I really want to make these one-time transfers as quick as possible, so I won't benefit from making multiple transfers faster through stronger compression. Thanks.
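
    If compatibility with gzip/bzip2 can be dropped, one tool that does exactly this adaptation is zstd with its --adapt mode, which raises and lowers the compression level on the fly to keep the output pipe full; a sketch over ssh (host and paths are placeholders, and zstd being installed on both ends is an assumption):

        tar cf - /data | zstd --adapt | ssh user@receiver 'zstd -d | tar xf - -C /dest'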


  • What does this strange network/subnet mask mean?

    - by dunxd
    I'm configuring a new ASA 5505 for deployment as a VPN endpoint in a remote office. After configuring it and connecting the VPN, I get the following messages:

        WARNING: Pool (10.6.89.200) overlap with existing pool.
        ERROR: IP address,mask <10.10.0.0,93.137.70.9> doesn't pair

    10.6.89.200 is the address I configured for the ASA; it has the subnet mask 255.255.255.0. The IP address 10.10.0.0 corresponds to one of our subnets, but it certainly wouldn't have a subnet mask of 93.137.70.9. That looks more like a public IP address (and resolves to an ADSL connection somewhere). I am sure that if we had such a subnet configured, it would indeed overlap with 10.6.89.200. There is no reference to 93.137.70.9 in the config of this ASA or of our head office ASA. Can anyone shed light on what is going on here? The sudden appearance of a strange subnet mask is a bit alarming.


  • Windows Server NTFS volume list file name encodings and any illegal file names

    - by benbradley
    I'm having to deal with a Windows Server (NTFS) file server and our backup application appears to be failing with certain files. According to this https://en.wikipedia.org/wiki/NTFS#Internals NTFS apparently supports file names encoded in UTF-16 but according to their support team, our backup application only supports UTF-8. I'd like to confirm whether this is actually the problem by seeing the file name encoding for myself. The files that are failing appear to be using plain English A-Z letters and other ASCII characters. No accents or non-English letters etc. I suppose even though the letters appear to be plain A-Z the file name could still be encoded in UTF-16. Does anyone know of a utility or script that can recursively go through all files in a directory and show the encoding of the file name? Then I could try renaming to UTF-8 to see if the backup can proceed. I'm not a Windows developer so can't write this up myself. Presumably the encoding of the file name should be stored in the FS somewhere and therefore it should be possible to expose this.

