  • 550 Requested action not taken: mailbox unavailable

    - by Porch
    I set up a small box with Server 2003 64-bit to be used as a web server and email server for a small school. Real simple stuff for a few users: a simple website and a handful of emails. rDNS and SPF records are set up and pass every test I found, including the tests at dnsstuff.com. Sending email to almost every address (Google, Hotmail, AOL, whatever) works. However, with one domain I get a bounce with the error:

        550 Requested action not taken: mailbox unavailable

    It's another school running Exchange, judging from some packet sniffing with Wireshark. Every address on this domain I have tried sending to gives this error. The addresses are valid, as I can send to them from my personal and Gmail accounts without a problem. Does anyone know of anti-spam software that gives a 550 error like the above? What else could this be? Thanks for any suggestions. A packet capture of the two servers communicating looks like this:

        220 <server snip> Microsoft ESMTP MAIL Service, Version: 6.0.3790.3959 ready at Sat, 2 Oct 2010 12:48:17 -0700
        EHLO <email snip>
        250-<server snip> Hello [<ip snip>]
        250-TURN
        250-SIZE
        250-ETRN
        250-XXXXXXXXXX
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-XXXXXXXX
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XXXXXXX
        250 OK
        MAIL FROM: <email snip>
        250 2.1.0 <email snip>....Sender OK
        RCPT TO:<email snip>
        250 2.1.5 <email snip>
        DATA
        354 Start mail input; end with <CRLF>.<CRLF>
        <email body here>
        .
        550 Requested action not taken: mailbox unavailable
        QUIT
        221 Goodbye
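
    A quick way to narrow this down is to repeat the conversation by hand from the affected server; the sketch below uses placeholder host names, so substitute the real ones. In the capture above, RCPT TO was accepted with 250 and the 550 only arrived after the message body, which usually points at content or reputation filtering on the receiving side rather than a mailbox that does not exist.

        telnet mail.otherschool.example 25
        EHLO mail.myschool.example
        MAIL FROM:<admin@myschool.example>
        RCPT TO:<user@otherschool.example>
        DATA
        Subject: delivery test

        test body
        .
        QUIT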

  • Using GUI ftp on Win7 and Vista without additional software

    - by Stephen Jones
    Goal: provide a 'no-software' method for 'less technical' users to access a password-protected FTP location from Win7 and Vista (the existing approach for WinXP works). 'No software' means without installing additional software (e.g. FileZilla, WinSCP) - the solution is supplied to external non-technical users.
    WinXP (works): Using Windows Explorer, WinXP supports non-technical FTP access by pasting ftp://username:password@server.com into the address bar. The remote FTP site's files / directory structure becomes available and can be copied to / from easily (in the style of local file copy / paste) by a 'less technical' user.
    Win7 / Vista (doesn't work): Pasting the same URL into Windows Explorer on Win7 or Vista causes an error: "An error occurred opening that folder on the FTP server. Make sure you have permission to access that folder. Details: The connection with the server was reset."
    Notes:
    a) The same username/password/server typed from the (DOS) command line achieves access to the server, but this is a more 'technical' solution than desired. I am looking for a WinXP-equivalent solution.
    b) Under 'Control Panel' / 'Internet options' / 'Advanced' tab, the boxes for 'Enable FTP folder view' and 'Use Passive FTP' are ticked (enabled).
    c) Adding an inbound firewall rule for local port 20 (TCP) was attempted with no difference in results (i.e. failure).
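
    If Explorer keeps refusing the URL, one stop-gap that still needs no extra software is to wrap Windows' built-in command-line client in a batch file the user can double-click. This is only a sketch - the server name, credentials and file name below are placeholders.

        @echo off
        rem build a one-shot script for the built-in ftp.exe client, then run it
        echo open ftp.example.com> ftpcmds.txt
        echo user myuser mypassword>> ftpcmds.txt
        echo binary>> ftpcmds.txt
        echo get report.pdf>> ftpcmds.txt
        echo bye>> ftpcmds.txt
        ftp -n -s:ftpcmds.txt
        del ftpcmds.txt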

  • Why won't my service start, and why doesn't upstart output any errors?

    - by Alex Waters
    I am trying to 'start gunicorn' as a service via upstart as user ale. I'm using gunicorn/flask on Ubuntu 12.04 with upstart 1.5. Here is my /etc/init/gunicorn.conf:

        setuid btw
        setgid flask
        script
          export HOME=/home/btw
          export WORKON_HOME=$HOME/.virtualenvs
          . $HOME/.virtualenvs/default/bin/activate
          cd $HOME/flask
          workon default
          gunicorn -c gunicorn.py bw:app
        end script

    It doesn't output anything other than "gunicorn start/running, process 12992". If I then do 'status gunicorn' I get "stop/waiting". Any ideas on how to debug this? I tried following http://upstart.ubuntu.com/wiki/Debugging but it didn't help. If I do the following as user ale in the app's directory:
    1. workon default
    2. gunicorn -c gunicorn.py bw:app
    then Gunicorn runs fine. Here is ~/flask/gunicorn.py:

        bind = "0.0.0.0:8080"
        workers = 3
        backlog = 2048
        worker_class = "gevent"
        debug = True
        daemon = False
        pidfile = "/tmp/gunicorn.pid"
        log_level = "debug"
        accesslog = "/var/log/gunicorn/access.log"
        errorlog = "/var/log/gunicorn/error.log"
        user = "btw"
        group = "flask"

    Also, /var/log/error.log doesn't show anything new when I try to start the Gunicorn service. If I start it manually, it shows that the workers have been loaded, etc. Thanks for any help / suggestions!
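
    One way to get the job's own output is to add a "console log" stanza (upstart 1.4+ can capture it) and tail the per-job log. The variant below is only an untested sketch of the conf above; the exec line and the comment about workon are assumptions about the environment, not a verified fix.

        # /etc/init/gunicorn.conf (debugging sketch)
        setuid btw
        setgid flask

        # write the job's stdout/stderr to /var/log/upstart/gunicorn.log
        console log

        script
          export HOME=/home/btw
          export WORKON_HOME=$HOME/.virtualenvs
          # 'workon' is a shell function from virtualenvwrapper.sh, not from the
          # activate script, so it may be undefined in this non-interactive shell;
          # sourcing activate alone already selects the virtualenv
          . $HOME/.virtualenvs/default/bin/activate
          cd $HOME/flask
          exec gunicorn -c gunicorn.py bw:app
        end script

    After "start gunicorn", check /var/log/upstart/gunicorn.log for the traceback that never reaches /var/log/error.log.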

  • Macbook Pro - 15" with i7 processor - Any problems with heat?

    - by webworm
    You may have already heard about the review done by the folks at PC Authority in Australia, where they had an i7 MacBook Pro that got up to 100 degrees Celsius during benchmarking. Here is the URL in case you have not read it: http://www.pcauthority.com.au/News/172791,macbook-pro-helps-core-i7-hit-100-degrees.aspx In any case, I was considering purchasing a 15" MacBook Pro with the i7 processor and the NVIDIA GeForce GT330M with 512 MB of video memory. Having read how hot the computer got, I started to become hesitant about purchasing. My main concern is long-term damage to the computer due to excessive heat. I plan to use the MacBook Pro as a development machine where I will be running Windows 7 within VMware Fusion or VirtualBox. Within the VM I will be running IIS, SQL Server, Visual Studio and SharePoint Server, hence why I would like the power of the i7 processor. That is why I wanted to check with actual owners of MacBooks with the i7 processor and see what their experiences have been. Have you noticed excessive heat? How does your MacBook handle processor-intensive apps over long periods of time? Thank you!

  • Unable to remove Read-Only attribute from folder in Windows XP

    - by elcuco
    I have a directory from which I cannot remove the read-only attribute. The computer is running XP SP2 (or SP3, not sure) and the directory sits on an NTFS file system. Looking around the web I found this: http://support.microsoft.com/kb/256614 which says that if the directory is "customized" it's treated as a system folder and thus "read only". I don't think that is the scenario in my case, but anyway it's not helping; their recommendation is more or less:

        attrib -r -s d:\data /s /d

    and this is not working for me. Any other ideas?
    More info: The directory is served by an HTTP server (WAMP) and the directory is an SVN checkout. What happens is that the web server cannot write files into the directory (imagecache from Drupal, if you are really interested).
    Edit 2: The original post claimed that the directory sits on a VFAT FS; however, I booted Fedora 11 from a live CD and the partition is marked as NTFS.
    Edit 3: I left the company where I worked when this situation happened... so I cannot fully close this question. But things get even worse: I tested the "attrib -r" answer I posted and it did not work for me, and now the developer says that it worked for her. A nice WTF moment. Probably a reboot helped... Sorry for losing details. If anyone has the same problem and one of the answers helps - please comment.
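
    For what it's worth, "web server cannot write" is more often an NTFS-permission problem than the read-only flag, so it is worth looking at both from a command prompt first. The first two commands below only display state; the third is the recursive clear from KB256614 quoted above.

        attrib d:\data
        cacls d:\data
        attrib -r -s d:\data /s /d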

  • Can't connect to Synology DiskStation through HTTPS when using Windows 7 Import

    - by LeonidasFett
    A little background to my problem: I have a Synology DiskStation 213j that I use as a backup/data storage solution. When I'm at work I would like to push and pull files from my DiskStation, but I can't use VPN, which is forbidden for outgoing connections. So I wanted to try HTTPS so I can at least connect securely to the web interface. I mostly use Chrome, which uses the Windows certificate store, so I tried importing a self-signed certificate into it, without success. I still get a warning in Chrome telling me the connection is not secure because it can't be verified. When I import the certificate into Firefox, though, it works and I can connect through HTTPS. I checked my domain on this site: http://www.sslshopper.com/ssl-checker.html - it shows no errors, only a warning that the certificate is self-signed, which is OK in this case. Anyone got any idea why importing the certificate into Windows 7 doesn't work? I tried right-clicking the domain.mydomain.de.crt file --> Install Certificate --> Next --> both options here (in the case of "Place all certificates in the following store" I selected "Third-Party Root Certification Authorities"), to no avail.
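
    A minimal sketch of doing the import from an elevated command prompt instead of the wizard, so you can see exactly which store the certificate ends up in (file name as in the question; "Root" is the Trusted Root Certification Authorities store):

        certutil -addstore -f Root domain.mydomain.de.crt
        certutil -store Root | findstr /i mydomain

    Chrome reads the Windows store when it starts, so restart the browser after the import.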

  • RPM issues after signing JDK 1.6 64-bit

    - by organicveggie
    I'm trying to sign the Java JDK 1.6u21 64-bit RPM on CentOS 5.5 for use with Spacewalk and I'm running into problems. It seems to sign okay, but when I check the signature it appears to be missing the key I just used to sign it - yet RPM shows the key in its list...

        # rpm --addsign jdk-6u21-linux-amd64.rpm
        Enter pass phrase:
        Pass phrase is good.
        jdk-6u21-linux-amd64.rpm:
        gpg: WARNING: standard input reopened
        gpg: WARNING: standard input reopened
        # rpm --checksig -v jdk-6u21-linux-amd64.rpm
        jdk-6u21-linux-amd64.rpm:
            Header V3 DSA signature: NOKEY, key ID ecfd98a5
            MD5 digest: OK (650e0961e20d4a44169b68e8f4a1691b)
            V3 DSA signature: OK, key ID ecfd98a5

    Yet I have the key imported (edited for privacy):

        # rpm -qa gpg-pubkey* | grep ecfd98a5
        gpg-pubkey-ecfd98a5-4caa4a4c
        # rpm -qi gpg-pubkey-ecfd98a5-4caa4a4c
        Name        : gpg-pubkey                   Relocations: (not relocatable)
        Version     : ecfd98a5                     Vendor: (none)
        Release     : 4caa4a4c                     Build Date: Mon 04 Oct 2010 10:20:49 PM CDT
        Install Date: Mon 04 Oct 2010 10:20:49 PM CDT   Build Host: localhost
        Group       : Public Keys                  Source RPM: (none)
        Size        : 0                            License: pubkey
        Signature   : (none)
        Summary     : gpg(FirstName LastName <[email protected]>)
        Description :
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        Version: rpm-4.4.2.3 (NSS-3)
        ...key goes here...
        =gKjN
        -----END PGP PUBLIC KEY BLOCK-----

    And I'm definitely running a 64-bit version of CentOS:

        # uname -a
        Linux spacewalk.mycompany.corp 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:04:09 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

    Without a valid signature, Spacewalk refuses to install the RPM unless I completely disable signature checking. I have tried this with two different keys and two different users on the same machine without any success. Any bright ideas?
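
    A couple of read-only checks that narrow this down: confirm which secret key rpm is actually signing with and that its short ID really is ecfd98a5. The %_gpg_name lookup assumes the usual ~/.rpmmacros setup for rpm --addsign.

        gpg --list-secret-keys            # short ID is shown, e.g. 1024D/ECFD98A5
        cat ~/.rpmmacros                  # %_gpg_name should name that same key/uid
        rpm -Kv jdk-6u21-linux-amd64.rpm  # same as --checksig -v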

  • Linux DHCPD Mac-Address based Groups

    - by GruffTech
    Our current dhcpd.conf looks like the following - about as basic as it gets:

        subnet 10.0.32.0 netmask 255.255.255.0 {
          range 10.0.32.100 10.0.32.254;
          option subnet-mask 255.255.255.0;
          option broadcast-address 10.0.32.255;
          option domain-name-servers 208.67.222.222, 208.67.220.220;
          option routers 10.0.32.5;
          host Dev-ABaird-W {
            hardware ethernet 00:1D:09:3E:49:13;
            fixed-address 10.0.32.94;
          }
          ... more static hosts ...
        }

    The old router is 10.0.32.1. Our company wanted to implement a Squid proxy to better monitor web traffic while at work and, if necessary, block large time-wasters, e.g. Facebook.com. However, we've quickly realized that this change has played a mean prank on our Polycom SIP phones. Occasionally our phones will not ring: the end recipient hears ringing (this is artificially created by our PBX), but the handset never rings. The ONLY thing that has changed in our network is the option routers line. So, since all Polycom MAC addresses begin with 00:04:F2, would it be possible in DHCP to say any 00:04:F2:* MAC addresses get option routers 10.0.32.1, and anything else must talk with our gateway?
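
    Yes - ISC dhcpd can classify clients by MAC prefix and hand each class different options. A sketch along those lines is below; the split ranges are an assumption (pools must not overlap), so adjust them to taste.

        class "polycom-phones" {
          # first three octets of the MAC (hardware = type byte followed by the MAC)
          match if substring(hardware, 1, 3) = 00:04:f2;
        }

        subnet 10.0.32.0 netmask 255.255.255.0 {
          option subnet-mask 255.255.255.0;
          option broadcast-address 10.0.32.255;
          option domain-name-servers 208.67.222.222, 208.67.220.220;

          pool {
            allow members of "polycom-phones";
            range 10.0.32.100 10.0.32.149;
            option routers 10.0.32.1;      # phones keep the old router
          }
          pool {
            deny members of "polycom-phones";
            range 10.0.32.150 10.0.32.254;
            option routers 10.0.32.5;      # everyone else goes via the proxy gateway
          }

          host Dev-ABaird-W {
            hardware ethernet 00:1D:09:3E:49:13;
            fixed-address 10.0.32.94;
          }
        }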

  • Download JDK onto a remote server

    - by itsadok
    I want to get the latest JDK onto a server in a remote location. Downloading the JDK from Sun's website requires jumping through all kinds of hoops until you actually get the file. I'm not sure exactly whether they use cookies or my IP address, but simply copying the file URL and trying wget on the server doesn't work. Googling for mirrors of the JDK, I could only find old versions. Right now I'm left with the option of downloading it onto my computer, then uploading it to the server. This feels slow and stupid. Anyone got a better idea?
    EDIT: Thanks for all the replies. Just to clarify, as I'm writing this I'm rsyncing the 78 MB file to my server. It should be done in about an hour, so it's not such a big deal. However, since this is not the first time I'm doing this, I was hoping for a better solution for next time.
    Solution: What I ended up doing was:

        sudo aptitude install lynx-cur
        www-browser http://java.sun.com/javase/downloads/

    From there it's mostly using the arrow and enter keys, and answering "Yes" to a lot of lynx security questions (about cookies and certificates). Thanks to resonator.
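
    Another option for next time, if the download is gated on a session cookie rather than the IP address: accept the terms in a local browser, export its cookies to a Netscape-format cookies.txt (most browsers have an extension for this), copy that file to the server and let wget present it. This is only a sketch - the URL is whatever direct link the local browser ends up at.

        wget --load-cookies cookies.txt -O jdk-6u21-linux-x64.bin \
            "<direct download URL copied from the local browser>"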

  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application returns an IIS error, "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 on Windows XP, but in this case there is really only one user (albeit with a few tabs open at a time). Microsoft suggests you can reduce the problem by turning off HTTP keep-alives for that particular web site (http://support.microsoft.com/kb/262635):

        If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft Windows XP Professional, disable HTTP keep-alives in the properties of the Web site. When you do this, a limit of 10 concurrent connections still exists, but IIS does not maintain connections for inactive users.

    I may do that, though I'm worried about performance degradation. I also notice that IE8 appears to handle this differently than IE7: by default, IE6 and IE7 use 2 persistent connections while IE8 uses 6. Perhaps IE8 itself is generating multiple connections in an attempt to be faster, and those additional connections are overwhelming the artificially limited IIS 5.1 on XP? Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users that use this application, it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.
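
    A sketch of the registry route, using the per-process FeatureControl values IE8 introduced for connection limits; setting them to 2 should make it open connections the way IE7 did, but treat this as something to verify on a test machine rather than a documented policy.

        reg add "HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_MAXCONNECTIONSPERSERVER" /v iexplore.exe /t REG_DWORD /d 2 /f
        reg add "HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_MAXCONNECTIONSPER1_0SERVER" /v iexplore.exe /t REG_DWORD /d 2 /f

    The same values can be pushed out via Group Policy Preferences if hand-editing each profile is impractical.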

  • Enabling CURL on Ubuntu 11.10

    - by Afsheen Khosravian
    I have installed curl:

        sudo apt-get install curl libcurl3 libcurl3-dev php5-curl

    and I have updated my php.ini file to include (I also tried .so):

        extension=php_curl.dll

    To test whether curl is working I created a file called testCurl.php which contains the following:

        <?php
        echo ‘<pre>’;
        var_dump(curl_version());
        echo ‘</pre>’;
        ?>

    When I navigate to localhost/testCurl.php I get an error: HTTP Error 500. Here's a snippet from the error log:

        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/php_curl.dll' - /usr/lib/php5/20090626+lfs/php_curl.dll: cannot op$
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/sqlite.so' - /usr/lib/php5/20090626+lfs/sqlite.so: cannot open sha$
        [Sun Dec 25 12:10:17 2011] [notice] Apache/2.2.20 (Ubuntu) PHP/5.3.6-13ubuntu3.3 with Suhosin-Patch configured -- resuming normal operations
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/

    Can anyone help me to get curl working?
    The problem was with the original test code. I used a new test file containing this, and curl is now working:

        <?php
        ## Test if cURL is working ##
        ## SCRIPT BY WWW.WEBUNE.COM (please do not remove) ##
        echo '<pre>';
        var_dump(curl_version());
        echo '</pre>';
        ?>
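
    For the record, on Ubuntu the PHP curl module ships as curl.so and is enabled by /etc/php5/conf.d/curl.ini when php5-curl is installed, so the Windows-style "extension=php_curl.dll" line only produces the startup warning seen in the log. A small clean-up sketch (the php.ini path is the usual one for Apache on 11.10; adjust if yours differs):

        php -m | grep -i curl          # "curl" in the output means the module loads for the CLI
        sudo sed -i '/extension=php_curl.dll/d' /etc/php5/apache2/php.ini
        sudo service apache2 restart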

  • Dual Monitor setup issues between laptop and external led monitor

    - by Julian
    I have two challenges:
    1. The monitor will not connect to the laptop through HDMI.
    2. Watching HD video content sometimes causes the laptop to turn off, especially when I'm streaming from tekzilla.com.
    Setup: I got my new HP 2311x LED LCD monitor this week and I have it running as the main monitor, extended by the 15-inch screen on my Dell Studio 1558. Right now I have to connect the external monitor through VGA.
    For the HDMI connection issue, I suspect that the appropriate drivers are not installed, because I don't see any HDMI device in Device Manager. I've checked and I don't see any HDMI-specific drivers listed online.
    For the shutdown issue, I suspect the laptop might be overheating. Not sure why it would - it never did that while I watched movies on my laptop's default screen.
    My laptop configuration:
    - 15" LED LCD screen at 1366 x 768
    - Intel i5 processor
    - integrated graphics card
    - 4 GB DDR3 RAM
    - 500 GB hard drive
    I've tried everything from switching the source on my monitor to HDMI to start-up combinations, and nothing has worked. What could be the issue and how do I solve it?

  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple of virtualized file servers running in QEMU/KVM on Proxmox VE. The physical host has 4 storage tiers with significant performance variance. They're attached both locally and via NFS. These will be provided to the file server(s) as local disks, abstracted into pools, and will handle multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage), in which the accepted answer was a suggestion to abandon a Linux solution for NexentaStor. I like the idea of running NexentaStor; it almost fits the bill. NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16 TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know whether ZFS pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove (http://www.enigmadata.com/smartmove.html), and it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a Linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

  • How to fix a bootable USB Kubuntu installation when the drive has maxed out?

    - by NoCatharsis
    I used Universal-USB-Installer-v1.5.1 from PenDriveLinux.com with Kubuntu 10.04 so I could set up my 4GB flash drive as a totally independent installation. Unfortunately, there was an OS upgrade available which Kubuntu downloaded and attempted to install. This, along with some other software, apparently maxed out my drive before I realized it. Now when I try to boot from the drive, everything boots as normal to the OS boot screen where I select "Boot from this Kubuntu USB Installation." The startup process initiates, then stalls about halfway through and hangs indefinitely. I'm guessing the drive is trying to use space it doesn't have and completely stops working. I realize that once the OS upgrade is in place, the old files could be deleted for a potential 700MB space gain. However, I just have no way to get into the OS and complete the upgrade. My main OS is Windows 7. Is there a way I can fix this issue from within Windows without formatting the entire drive and reinstalling Kubuntu from scratch?

  • flowchart for debugging a slow/unresponsive server

    - by davidosomething
    So the server is slow:
    1. Roll back to the previous known working build.
       - Success? Code problem.
       - Fail? Go on.
    2. Ping the IP address.
       - Success? Maybe a DNS problem, go on.
       - Fail? Server or connection problem, go on.
    3. Ping and tracert your domain.com from inside your network.
       - Previous step succeeded but this fails: DNS problem.
       - Previous step succeeded and this succeeds? Go on.
       - Previous step failed and this fails? Go on; it could be you or the network.
       - Previous step failed but this succeeds? Go on.
    4. Try it from outside your network (http://centralops.net/co/).
       - Fail? The server's network connection sucks.
       - Success? If the inside-network test failed, your network sucks.
    5. Check the server load: CPU/RAM usage. Is it overloaded?
       - Yes? Who's the culprit? Kill some processes/reboot.
       - No? Go on.
    What other steps should I add?
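
    For the "check the server load" step, a few stock commands cover CPU, memory and disk in one pass (iostat comes from the sysstat package):

        uptime                     # 1/5/15-minute load averages
        free -m                    # RAM and swap usage in MB
        top -b -n 1 | head -20     # snapshot of the busiest processes
        iostat -x 1 3              # per-device I/O utilisation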

  • How to route public static IP to a virtual machine on a vmware ESXi host?

    - by Kevin Southworth
    I have 5 static IPs from my ISP (Comcast) and I have a physical machine with VMware ESXi 4.0 on it that is hosting multiple virtual machines. Right now I am just using the default VMware virtual network (vSwitch0) with DHCP from the Comcast IP gateway router and everything is working fine. Each virtual machine can access the internet, etc. One of my virtual machines is a web server (Windows Server 2008) and I want to assign it one of my 5 static IPs so it's accessible from the public internet, while leaving the other VMs on the internal LAN still using DHCP. If I just plug my laptop directly into the Comcast IP gateway (it has 4 ports on the back) and assign my laptop a static IP using the Windows networking dialogs, then I can hit my laptop from the public internet and it works great. However, if I try the same steps to set a static IP config on my Windows Server 2008 VM, it does not work. The VM cannot access the internet (open Firefox and try to visit google.com), and I cannot see the VM from the public internet either. I'm assuming I'm missing something in the ESXi config somewhere, but I'm pretty new to ESXi and I'm not sure how to configure it to work this way.
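
    For reference, the same static configuration that worked on the laptop can be applied inside the guest from an elevated prompt; the interface name, addresses, mask and gateway below are placeholders for one of the five statics and whatever Comcast assigned. If the guest still cannot reach the internet afterwards, the usual culprit is that its port group/vSwitch is not bridged to the physical NIC that plugs into the Comcast gateway.

        netsh interface ip set address "Local Area Connection" static 203.0.113.10 255.255.255.248 203.0.113.9 1
        netsh interface ip set dns "Local Area Connection" static 8.8.8.8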

  • TV won't go into standby

    - by Robert
    When I select Start -> Turn off computer -> Standby, the 'turn off computer' window closes, and then nothing else happens. I can start new applications, and Windows acts like I never selected standby. I left it running for several hours after that. If I have a TV program scheduled to record when I select standby, I get a window (from the Pinnacle TV software) asking if I'm sure, since there are programs scheduled to record - and the computer just keeps running after I select Yes, never going into standby. I added that detail as it shows the standby process is starting. [This problem also happens if a TV program is not scheduled, so the scheduler task is not running/in memory. It happens regardless of whether I'm watching TV, and regardless of whether Media Center is running (it usually isn't; I'm using Pinnacle to watch TV).] I looked at "How to troubleshoot hibernation and standby issues in Windows XP" (http://support.microsoft.com/kb/907477) - ACPI is enabled, and "Standby" is an option in "Power Options Properties", so it appears to be set up correctly. Windows XP SP3 Media Center Edition, all current updates installed.

  • Getting file not found error with pdebuild

    - by user35042
    I am attempting to build a Debian package using pdebuild on my main development server (running Debian wheezy). Here is the command I run:

        pdebuild --pbuilder cowbuilder --buildresult .. \
            --debbuildopts -i -- \
            --basepath /var/cache/pbuilder/base-wheezy.cow \
            --distribution wheezy --configfile /etc/pbuilder/wheezy

    This works on other servers, but on one server I get this output:

        I: using cowbuilder as pbuilder
        dpkg-buildpackage: source package libexample-orange-util-perl
        dpkg-buildpackage: source version 0.08
        dpkg-buildpackage: source changed by John User <[email protected]>
         dpkg-source -i --before-build libexample-orange-util-perl
         fakeroot debian/rules clean
        dh clean
           dh_testdir
           dh_auto_clean
           dh_clean
         dpkg-source -i -b libexample-orange-util-perl
        dpkg-source: info: using source format `3.0 (native)'
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.tar.gz
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.dsc
         dpkg-genchanges -S >../libexample-orange-util-perl_0.08_source.changes
        dpkg-genchanges: including full source code in upload
         dpkg-source -i --after-build libexample-orange-util-perl
        dpkg-buildpackage: source only upload: Debian-native package
        File not found: ../libexample-orange-util-perl_0.08.dsc

    There is no file ../libexample-orange-util-perl_0.08.dsc, but on other build servers no such file is needed (it gets created by the package build). What is causing this "file not found" error?
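
    One way to see where the .dsc actually lands on the misbehaving server is to run the source-package step by itself from inside the package directory and then list the parent directory; this is only a diagnostic sketch, not a fix.

        dpkg-buildpackage -S -us -uc -i
        ls -l ../libexample-orange-util-perl_0.08.*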

  • Outlook sends uninvited images as attachments

    - by serhio
    Outlook always sends along two pictures (image001.png and image002.gif) with my mails, even if I don't attach any images. How do I fix this? Thanks.
    EDIT - SIGNATURE: The HTML of my signature:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <HTML xmlns:o = "urn:schemas-microsoft-com:office:office"><HEAD><TITLE>Company default Signature</TITLE>
        <META content="text/html; charset=windows-1252" http-equiv=Content-Type>
        <META name=GENERATOR content="MSHTML 8.00.6001.18854"></HEAD>
        <BODY>
        <DIV align=left><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt">
        <P class=MsoNormal align=left><BR>Cordialement,&nbsp;</SPAN></FONT><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt"><o:p>&nbsp;</o:p></SPAN></FONT></P>
        <P class=MsoNormal><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt">Firstname LASTNAME<BR></SPAN></FONT><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt">COMPANY Name<o:p></o:p></SPAN></FONT></P></DIV></BODY></HTML>

    EDIT 2 - STATIONERY: I don't use stationery format.
    EDIT 3 - HTML: If I delete the signature (Ctrl+A, Del) in HTML mode, the images still appear. If I use the signature in text-only mode, the images disappear...

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass virtual hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config:

        NameVirtualHost 10.10.0.205

        <VirtualHost 10.10.0.205>
            ServerName *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

    A hostname such as www.domain.com.dev correctly resolves to 10.10.0.205, but always selects the top vhost instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible, and must I use another IP for each TLD? apachectl -S outputs (trimmed):

        10.10.0.205:*          is a NameVirtualHost
                 default server *.test
                 port * namevhost *.test
                 port * namevhost *.dev
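
    In case it helps the next reader: Apache only honours wildcards in ServerAlias, not in ServerName, so both blocks above fall back to "first vhost for the IP wins". A sketch of the usual workaround (the ServerName values are arbitrary placeholders, one per block):

        <VirtualHost 10.10.0.205>
            ServerName catchall.test
            ServerAlias *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName catchall.dev
            ServerAlias *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>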

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid; the data is by no means critical, and this is now a learning experience first, time saver second. I set up a 100 GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large ISO file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed this screen: http://imgur.com/1xlcu.jpg - that is my experimental target, tank/iSCSI, and it still has a lot of data in it. Assuming my ISOs are still in this pool, how would I go about recovering them? While writing this I tried GetDataBack for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.
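
    If the zvol itself was never destroyed, one hedged approach is to image it raw and run a file-carving tool (testdisk/photorec or similar) against the image rather than against the live LU. The device path below is an assumption based on the pool/volume name shown in napp-it (tank/iSCSI), and the output location is a placeholder with enough free space.

        dd if=/dev/zvol/rdsk/tank/iSCSI of=/somewhere/with/space/lun.img bs=1048576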

  • Trying to delete directory with "rm -rf", but get message that it's not empty

    - by Ben Hocking
    I've tried deleting a directory using "rm -rf" and I'm getting the message "Directory not empty":

        Bens-MacBook-Pro:please benjaminhocking$ ls -lart empty_directory/
        total 16
        drwxr-xr-x  5 benjaminhocking  staff  170 Aug 27 14:46 .
        drwxr-xr-x  3 benjaminhocking  staff  102 Aug 27 15:28 ..
        Bens-MacBook-Pro:please benjaminhocking$ rm -rf empty_directory/
        rm: empty_directory/: Directory not empty
        Bens-MacBook-Pro:please benjaminhocking$ rmdir empty_directory/
        rmdir: empty_directory/: Directory not empty

    If I try the same thing using Finder (dragging the folder to the Trash), I get the message "The operation can’t be completed because the item “empty_directory” is in use." I've tried doing xattr -d com.apple.quarantine, purely out of superstition, but it did no good. A probably important piece of context is that this directory was initially in a directory that should've been deleted by a "make clean" command I issued prior to Terminal locking up on me, after which a little over half of the other programs I had running also locked up, including Skype, and eventually the OS itself. I ended up having to reboot the computer by pressing and holding the power key.
    Edit to add: Another important piece of information I left off was that this was happening in an encrypted folder à la encfs. I was able to track down the corresponding folder in the encrypted side of things and delete it there. I still don't know why I couldn't do it from the decrypted side of things like I normally do. I'll leave this unanswered for now in case anyone has a good answer for that.
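
    Since Finder claims the item is in use, two quick checks from Terminal can show what is holding it and whether anything hidden is still inside (both are read-only; the second uses macOS-specific ls flags):

        lsof +D empty_directory      # list processes with files open under the directory
        ls -lA@ empty_directory      # show hidden entries and extended attributes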

  • Office 2007 constantly crashes, logged as Event ID 1000

    - by Nori
    I have a user who, despite my best efforts, is having constant Office 2007 crashes. I've tried deleting their profile and setting it up again, repairing Office, uninstalling completely and then reinstalling, and swapping out memory sticks. One event log error I keep getting is the following (note all the Office errors are event ID 1000):

        Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
        Faulting module name: EMSMDB32.DLL, version: 12.0.6539.5000, time stamp: 0x4c1246f8
        Exception code: 0xc0000005
        Fault offset: 0x0005d8e2
        Faulting process id: 0xf6c
        Faulting application start time: 0x01cb6633f33384f3
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE
        Faulting module path: c:\progra~2\micros~1\office12\EMSMDB32.DLL
        Report Id: 0d4a2eab-d231-11df-80a0-4061868f5d10

    I also get this:

        Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
        Faulting module name: olmapi32.dll, version: 12.0.6538.5000, time stamp: 0x4bfc6ad9
        Exception code: 0xc0000005
        Fault offset: 0x002357a9
        Faulting process id: 0x5e4
        Faulting application start time: 0x01cb661f4546aa77
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE
        Faulting module path: c:\progra~2\micros~1\office12\olmapi32.dll
        Report Id: a4a90658-d224-11df-80a0-4061868f5d10

    The Excel error is this:

        Faulting application name: EXCEL.EXE, version: 12.0.6535.5002, time stamp: 0x4bd2a7f1
        Faulting module name: KERNELBASE.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdbdf
        Exception code: 0xe06d7363
        Fault offset: 0x0000b727
        Faulting process id: 0x14a8
        Faulting application start time: 0x01cb61ab7bc0abab
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\EXCEL.EXE
        Faulting module path: C:\Windows\syswow64\KERNELBASE.dll
        Report Id: ba0c454b-cd9e-11df-80a0-4061868f5d10

    I have also gotten this for PowerPoint:

        Faulting application name: POWERPNT.EXE, version: 12.0.6500.5000, time stamp: 0x49a68f9d
        Faulting module name: COMShim.dll, version: 2010.3.325.110, time stamp: 0x4c51e0b1
        Exception code: 0x40000015
        Fault offset: 0x0001e388
        Faulting process id: 0x1480
        Faulting application start time: 0x01cb5fe9a0660e81
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\POWERPNT.EXE
        Faulting module path: C:\Program Files (x86)\FactSet\COMShim.dll
        Report Id: e03d2a21-cbdc-11df-9bc8-4061868f5d10

    (Some of the above lines were edited to keep you from scrolling horizontally.)
    Lastly, I get this error several times a day. I don't think it is related, but maybe it is:

        Failed extract of third-party root list from auto update cab at: http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab with error: A required certificate is not within its validity period when verifying against the current system clock or the timestamp in the signed file.

    Any ideas? This is driving me nuts.

  • Why Ubuntu could treat hosts file so strange?

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something changed. I'm not sure what or when, and not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolution should work normally.

        /etc/hosts
        127.0.0.1 localhost test
        127.0.1.1 desktop

        /etc/host.conf
        order hosts,bind
        multi on

        /etc/resolv.conf
        # Generated by NetworkManager
        search <search domains obtained via DHCP>
        nameserver 192.168.0.3

        /etc/nsswitch.conf
        passwd:         compat
        group:          compat
        shadow:         compat
        hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks:       files
        protocols:      db files
        services:       db files
        ethers:         db files
        rpc:            db files
        netgroup:       nis

    But in fact it is not:

        user@test ~ $ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    Pinging is OK.

        user@test ~ $ host test
        test.myviacube.com has address xx.xxx.161.201

    I suspect that NetworkManager might cause this misbehavior, but I don't know where to start checking. Any thoughts, suggestions?
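
    One detail that often causes confusion here: host (and dig) talk to the DNS servers directly and never consult /etc/hosts, while ping resolves through the "hosts:" line in nsswitch.conf. To compare the two paths explicitly:

        getent hosts test      # resolves via nsswitch (files first, then DNS)
        dig +short test        # resolves via DNS only, bypassing /etc/hosts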

  • Ubuntu 64bit Xen DomU Issues after upgrade from Karmic to Lucid

    - by Shoaibi
    I was upgrading my servers today and it all went fine except the last machine, which has the following issues: [Resolved using http://www.ndchost.com/wiki/server-administration/upgrade-ubuntu-pre-10.04#post-1004-upgradefinal-steps]
    No login prompt on console - the boot output stops here:

        Done.
        Begin: Mounting root file system... ...
        Begin: Running /scripts/local-top ...
        Done.
        [    0.545705] blkfront: xvda: barriers enabled
        [    0.546949]  xvda: xvda1
        [    0.549961] blkfront: xvde: barriers enabled
        [    0.550619]  xvde: xvde1 xvde2
        Begin: Running /scripts/local-premount ...
        Done.
        [    0.870385] kjournald starting.  Commit interval 5 seconds
        [    0.870449] EXT3-fs: mounted filesystem with ordered data mode.
        Begin: Running /scripts/local-bottom ...
        Done.
        Done.
        Begin: Running /scripts/init-bottom ...
        Done.

    I also tried pressing ENTER and CTRL+C many times, no use.
    Resolved [/tmp was mounted as noexec, changing that fixed it]: I get errors when I try to re-install udev in single-user mode:

        Unpacking replacement udev ...
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Processing triggers for man-db ...
        Setting up udev (151-12.1) ...
        udev start/running, process 1003
        Removing `local diversion of /sbin/udevadm to /sbin/udevadm.upgrade'
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-2.6.32-25-server
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/fixrtc: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/ntfs_3g: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/resume: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/nfs-top/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/panic/console_setup: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/all_generic_ide: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/blacklist: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-bottom/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-bottom/ntfs_3g: Permission denied
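
    For anyone hitting the same wall of mkinitramfs "Permission denied" messages: the resolution note above points at /tmp being mounted noexec, so a minimal way to confirm and work around it for the duration of the reinstall (assuming /tmp is its own mount) would be:

        mount | grep /tmp                      # look for the noexec flag
        sudo mount -o remount,exec /tmp
        sudo apt-get install --reinstall udev  # or re-run: sudo update-initramfs -u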
