Search Results

Search found 16784 results on 672 pages for 'white box'.


  • Unable to access my IIS website using hostname. Works fine with localhost

    - by rajugaadu
    I am unable to access my IIS website or even the default website. I did a bit of research and checked the 'Integrated Windows Authentication' option on the Properties > Directory Security tab. From then on I could access the website using localhost, but when I use my hostname it asks for a domain username/password. Why is that? I don't understand why I am not able to access my website without checking this option to integrate Windows authentication. My goal is to access the website using both localhost and the hostname.

    More details on what I did: nothing out of the ordinary. IIS - Websites - Create new website - Create a working folder - Set a default page. I restart this website and then click on Browse, and I do not see my default page. I had to go to the Directory Security tab and select the "Integrated Windows Authentication" check box. Then I can see the default page coming. In IE, too, I can see the default page when I use http://localhost, but when I use http://{hostname} it asks for a domain username and password. Why?
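
    A minimal sketch of a check worth trying, assuming this is IIS 6 on Server 2003 (the AdminScripts path is the stock install location and the site ID 1 is hypothetical): with only Integrated Windows Authentication enabled, localhost often works silently because IE auto-sends credentials for the Local intranet zone, while the hostname triggers a prompt; enabling Anonymous access alongside it lets hostname requests through too.

        cd C:\Inetpub\AdminScripts
        cscript adsutil.vbs get W3SVC/1/Root/AuthFlags
        REM AuthFlags is a bitmask: 1 = Anonymous, 2 = Basic, 4 = Integrated Windows
        cscript adsutil.vbs set W3SVC/1/Root/AuthFlags 5
        REM 5 = Anonymous + Integrated, so both localhost and hostname access can work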

    Read the article

  • VMware ARP/Mac Networking

    - by Ross Wilson
    Hi Guys, I am very interested in how VMware networking works. I have scoured the VMware website and read their data sheets; this has given me some basic knowledge, but I now have some questions. Let's assume that we have a physical server running the VMware hypervisor, and the physical server is running a virtual machine. The physical box has one physical NIC. The NIC is connected to a switch, and so is a desktop client. Now, this is where my first question lies. The VM has an IP address: 192.168.1.1. How do desktop clients on the network communicate with this VM? So, the client pings 192.168.1.1. The ping packet is sent to the switch. The switch checks its MAC address table and sees that 192.168.1.1 is associated with the MAC address of the physical NIC. Correct? I then assume that the ping packet is sent to the server's physical NIC, where the hypervisor routes the packet to the VM that's using 192.168.1.1? Please could you give me a rundown of how VM networking works? Many thanks, Ross
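
    One quick way to see what actually happens on the wire (a sketch, assuming the VM uses bridged networking and the addresses above): in bridged mode the VM answers ARP with its own virtual MAC rather than the host NIC's, so the client and the switch each learn a separate entry for it.

        # From the desktop client:
        ping 192.168.1.1
        arp -a
        # Typical output (MACs hypothetical):
        #   192.168.1.1    00-0c-29-ab-cd-ef   dynamic   <- VMware OUI: the VM's own MAC
        #   192.168.1.50   00-1b-21-12-34-56   dynamic   <- the host's physical NIC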

    Read the article

  • apache name virtual host - two domains and SSL

    - by Tom
    I'm trying to set up Apache (2.2.3) to run two websites with SSL, using different domains and different IP addresses. Both websites run fine on port 80, but when I tried to enable SSL for website2 I get an ssl_error_bad_cert_domain error; website2 picks up the SSL cert for website1. Here is my setup in httpd.conf:

        # Website1
        NameVirtualHost 192.168.10.1:80
        <VirtualHost 192.168.10.1:80>
            DocumentRoot /var/www/html
            ServerName www.website1.org
        </VirtualHost>

        NameVirtualHost 192.168.10.1:443
        <VirtualHost 192.168.10.1:443>
            SSLEngine On
            SSLCertificateFile conf/ssl/website1.cer
            SSLCertificateKeyFile conf/ssl/website1.key
        </VirtualHost>

        # Website2
        NameVirtualHost 192.168.10.2:80
        <VirtualHost 192.168.10.2:80>
            DocumentRoot /var/www/html/chart
            ServerName www.website2.org
        </VirtualHost>

        NameVirtualHost 192.168.10.2:443
        <VirtualHost 192.168.10.2:443>
            SSLEngine On
            SSLCertificateFile conf/ssl/website2.cer
            SSLCertificateKeyFile conf/ssl/website2.key
        </VirtualHost>

    Update: in answer to Shane (this wouldn't fit in the comment box), here is the output from apachectl -S:

        VirtualHost configuration:
        192.168.10.2:80   is a NameVirtualHost
                 default server www.website2.org (/etc/httpd/conf/httpd.conf:1033)
                 port 80 namevhost www.website2.org (/etc/httpd/conf/httpd.conf:1033)
        192.168.10.2:443  is a NameVirtualHost
                 default server bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1040)
                 port 443 namevhost bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1040)
        192.168.10.1:80   is a NameVirtualHost
                 default server www.website1.org (/etc/httpd/conf/httpd.conf:1017)
                 port 80 namevhost www.website1.org (/etc/httpd/conf/httpd.conf:1017)
        192.168.10.1:443  is a NameVirtualHost
                 default server bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1024)
                 port 443 namevhost bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1024)
        wildcard NameVirtualHosts and _default_ servers:
        _default_:443          192.168.10.1 (/etc/httpd/conf.d/ssl.conf:81)
        Syntax OK
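
    For what it's worth, the bogus_host_without_reverse_dns entries on port 443 suggest the SSL vhosts carry no ServerName of their own; a sketch of the kind of change that usually clears this up (names and paths as above, untested):

        <VirtualHost 192.168.10.2:443>
            ServerName www.website2.org          # lets Apache match this vhost by name
            DocumentRoot /var/www/html/chart
            SSLEngine On
            SSLCertificateFile conf/ssl/website2.cer
            SSLCertificateKeyFile conf/ssl/website2.key
        </VirtualHost>

    The _default_:443 catch-all in /etc/httpd/conf.d/ssl.conf:81 is also worth checking, since it can grab SSL requests before the intended vhost does.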

    Read the article

  • samba share not on network after upgrading to Ubuntu 12.04 LTS

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This is an upgrade, not a format/reinstall; all the accounts and config were preserved. The basic setup is a local network with two Ubuntu machines (let's say A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router where everybody is connected with RJ-45 wired ethernet (not wi-fi). Since the A-Ubuntu upgrade, we can't see this machine's name on the network, and its name is no longer in the machine list in the D-Link router; we can see its IP address only. I can't access A-Ubuntu from the other two by its name, but I can ping it by its address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared samba folders on B-Ubuntu and C-MAC. But from B-Ubuntu and C-MAC, I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me that Samba should be fine, and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link does not have it in its table and nobody else finds it. After a lot of googling, I see that it is the job of avahi and mdns to do this. Those packages are running. I checked multiple config files for samba, avahi and mdns to see whether they look like the examples on the web, and whether they are similar to what I find on the working B-Ubuntu machine. They are the same. I restarted the samba and avahi services multiple times, and removed the firewall to make sure it does not block the hostname broadcast. I rebooted multiple times to make sure the updates I was making were effective. Still, I can't see the A-Ubuntu name on the network. Any idea what it could be, or where to look next?
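
    A couple of name-resolution probes that may narrow this down (a sketch, run from B-Ubuntu; hostnames as above): Windows-style browsing names come from Samba's nmbd (NetBIOS), while .local names come from avahi (mDNS), and either one failing on A-Ubuntu would produce these symptoms.

        # NetBIOS name, answered by nmbd on A-Ubuntu:
        nmblookup A-Ubuntu
        # mDNS name, answered by avahi:
        avahi-resolve -n A-Ubuntu.local
        # On A-Ubuntu itself, confirm both daemons are actually up:
        service nmbd status; service avahi-daemon status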

    Read the article

  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy, I previously asked this ServerFault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus is that it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes? We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site and like how the SAN can perform async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture. I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: I found that although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant; if a node goes down, you have lost the only copy of the data sitting on that hardware (one thing is certain, all hardware fails... when? is the only question). I have used a XioTech SAN as well. Well worth the money, BTW, but I think it is overkill for the size of the office I am targeting; the cost to get the hardware redundancy in the XioTech makes it a little out of reach for the budget I am working in. Thank you, Keith

    Read the article

  • Picking a linux compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time. Specifically the audio: I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around. I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboards by Intel, and some random models on Newegg. I wasn't able to see much of a difference from regular desktop motherboards other than that most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well. Same question: what's the difference? To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are: Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card. I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server? Where can I find out about Linux support for the components on this board: "Intel ICH10R", "Realtek ALC889", "Marvell 88E8056"? I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at one point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this? The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI slots, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.
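
    One way to gauge Linux support before buying (a sketch; the driver mappings below are the commonly cited in-tree ones and worth double-checking against your target kernel): each named component has had a mainline driver for a while, and on any running box lspci -nnk shows which module actually claims a device.

        lspci -nnk                 # lists every PCI device with the kernel driver in use
        # Commonly cited in-tree drivers for the parts named above:
        #   Intel ICH10R (SATA, AHCI mode)   -> ahci
        #   Realtek ALC889 (HD-audio codec)  -> snd-hda-intel
        #   Marvell 88E8056 (gigabit NIC)    -> sky2
        modinfo -F license sky2    # prints the module license (GPL for sky2)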

    Read the article

  • Windows AD DNS: Event ID 5504

    - by Chris_K
    Two of my AD controllers (both running the DNS service) appear to be having a similar issue. Both are logging lots of DNS events that look like this:

        Event Type:     Information
        Event Source:   DNS
        Event Category: None
        Event ID:       5504
        Date:           5/24/2010
        Time:           11:51:38 AM
        User:           N/A
        Computer:       ALPHA
        Description:
        The DNS server encountered an invalid domain name in a packet from 76.74.137.6.
        The packet will be rejected. The event data contains the DNS packet.

    The same event arrives at the same time with a packet from 76.74.137.7 as well. I know this is "Information", not an error, but since it is new and different it bothers me (yes, I fear unexplained change!). Both machines are running Windows 2003 R2 SP2. The DNS servers are not exposed to the internet. Both DNS servers are configured to use OpenDNS as forwarders. For both servers, this started about a week ago. Any thoughts on: 1) should I be concerned? 2) how can I stop/fix this? To keep it interesting, I have a 3rd AD/DNS box, same domain but a different Active Directory site, with the same forwarders, yet it doesn't have this issue.

    Read the article

  • Neophyte question about using Subtotal and CountIf in Excel

    - by Andrew
    Hi, I'm using Excel and having some problems with COUNTIF; I don't understand how it works differently from SUBTOTAL. I used the GUI to subtotal stuff and all the subtotals are right. Then I attempted to use COUNTIF to see how many requirements passed. That worked for the first subtotal only, and it's easy to see why. When I look at the box for the subtotal, it says:

        =SUBTOTAL(3,C286:C292)

    When I look at my formula for passed requirements, I have:

        =IF(ISTEXT(A285),COUNTIF(C286:C338,"=Passed"),"")

    Notice that the end of the range is wrong. How did the Subtotal manage to keep this correct? I typed in the formula for passed requirements and dragged it down the page. Everything behaved as expected (even the bit about ISTEXT dutifully figured out which row was which), but it got the last row wrong. Any ideas?

        SRS Maintenance Count    7     44
        SRS Maintenance    Passed
        SRS Maintenance    Passed
        SRS Maintenance    Passed
        SRS Maintenance    Passed
        SRS Maintenance    Passed
        SRS Maintenance    Passed
        SRS Maintenance    Passed
        SRS Reports Count        12    43
        SRS Reports        Passed
        SRS Reports        Passed
        SRS Reports        Passed
        SRS Reports        Passed
        SRS Reports        Failed
        SRS Reports        Passed
        SRS Reports        Passed
        SRS Reports        Failed
        SRS Reports        Passed
        SRS Reports        Passed
        SRS Reports        Failed
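
    A pattern that may explain the mismatch (hypothetical cell references matching the snippet above): the Subtotal feature writes each SUBTOTAL over exactly its own group's rows, while a dragged formula keeps the same range shape everywhere, so each group's COUNTIF has to be pinned to that group's rows by hand, e.g.:

        =IF(ISTEXT(A285),COUNTIF(C286:C292,"Passed"),"")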

    Read the article

  • Does any economically-feasible publicly available software compare audio files to determine if they are dupes?

    - by drachenstern
    In the vein of this question http://unix.stackexchange.com/questions/3037/is-there-an-easy-way-to-replace-duplicate-files-with-hardlinks, is there any software that will automatically parse my song library and find the ones that really are duplicates, so that one copy can be eliminated? Here's an example: my brother used to be a huge fan of remixing CDs. He would take all of his favorite tracks and put them on one. Then he would use my computer to read them in. So now I have something like 6 copies of Californication on my HDD, and they all differ by a few bytes overall. I have hundreds of songs in my library like this. I want to trim them down to unique tracks. They don't all have correct ID3 tags, so figuring out that Untitled(74).mp3 is the same as californication.mp3 is the same as whowrotethis.mp3 is tricky. I do NOT want a concert album and a studio album rip to be considered the same (if I just did artist/title matching I would end up with this scenario, which doesn't work for me). I use Windows (pick your platform) and will be getting an OSX box later in the year. I'll run Linux if that's what it takes to get it organized. I have unprotected AAC and mp3 files. Bonus points for handling WAV or MIDI, and bonus points for converting from those into MP3 (I can always use Audacity and LAME to convert later if I know they match, or to convert ahead of time if that will make things easier). Are there any suggestions, or do I need to go to Programmers or SO and build a list of requirements for comparing these things and write the software myself?

    Read the article

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC network access on)

    - by SocialAddict
    I'm having problems with my ASP.NET web forms system. It worked on our test server, but now that we are putting it live, one of the servers is within a DMZ and the SQL server is outside of that (still on our network, though, on a different subnet). I have opened up the firewall completely between these two boxes to see if that was the issue, and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try to use TransactionScope. We can retrieve data fine; it's just transactions that break. We have also used msdtc ping to test the connection, and with the amendments on the firewall it pings successfully, but the same error occurs. How do I resolve this error? Any help would be great as we have a system to go live today. Panic :)

    Edit: I have created a more straightforward test page with a transaction, as below, and this works fine. Could a nested transaction cause this kind of error, and if so, why would this only cause an issue when using a live box in a DMZ with a firewall?

        AuditRepository auditRepository = new AuditRepository();
        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1);
                auditRepository.Save();
                auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1);
                auditRepository.Save();
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace);
        }

    Read the article

  • Why doesn't apache2 consistently load template fragments from memcached?

    - by Hobhouse
    I run a webserver on an Ubuntu box in the Rackspace Cloud with Django 1.0x, apache2/WSGI and memcached 1.2.2. Some of my templates make use of template fragment caching:

        {% load cache %}
        {% cache 604800 keyname %}
            <!-- cache: {% now "H:i, j. b" %} -->
            {{ my_content }}
        {% endcache %}

    When I reload apache2, everything is fine. If keyname is not set, my_content is generated and keyname is set in memcached. After that, my_content is served from memcached. My problem is that after some hours (notably less than 604800 seconds), apache2 seems to stop talking to memcached, and my_content is generated from scratch every time. When this happens I can still set and get keys in memcached from my python shell. Memcached also has more than enough memory to store keys. But to get apache2 to start talking to memcached again I have to restart apache2, and then it will once again fetch the now several-hours-old keys from memcached. What can be the reason for this behaviour, and how do I fix it?
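
    A check worth running when it happens again (a sketch, assuming the stock Django memcached backend is configured in settings.py): exercising the cache through Django's own API from ./manage.py shell uses the same client code the apache2/WSGI processes use, unlike a raw python-shell connection to memcached.

        # ./manage.py shell
        from django.core.cache import cache
        cache.set('probe_key', 'hello', 60)    # 60-second TTL
        print cache.get('probe_key')           # expect 'hello'; None means the backend call failed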

    Read the article

  • What are the disadvantages of domain email forwarding?

    - by naivedeveloper
    I have a domain, example.com. My domain registrar gives me two options for email:

    1. Set up forwarding addresses (e.g., an address at example.com forwarded to a GMail address).
    2. Set up Google Apps for email management.

    Thus far, I have gone with option 1. I have a generic GMail account, and I set up various addresses at my registrar that all forward to it. Through the GMail account, I have the option to alias these addresses when sending email: from the GMail account, I can "send email as" one of my example.com addresses. That way, from the vantage point of the receiver, the email came from the example.com address as opposed to the GMail address.

    My question is: are there any disadvantages to this approach? Are these emails more susceptible to being picked up by spam filters vs. using the Google Apps approach? Is there any hidden indication that the email is being aliased? When viewing the email headers, it shows the email was sent from the example.com address, not the GMail address or "forwarded from" the GMail address or anything like that. Am I naive in assuming that my cheap approach to email is masked by aliasing my outgoing emails? I have chosen approach 1 simply because of the ease of setup. With that said, are there any advantages to going with approach 2 (the Google Apps approach)? Thanks for suggestions and advice.
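
    One deliverability detail that applies to both options (a sketch; example.com stands in for the real domain): spam filters commonly check SPF, and mail sent through GMail's "send as" leaves Google's servers either way, so a TXT record authorizing Google covers both the forwarding setup and Google Apps.

        example.com.   IN TXT   "v=spf1 include:_spf.google.com ~all"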

    Read the article

  • Determining a realistic measure of requests per second for a web server

    - by Don
    I'm setting up a nginx stack and optimizing the configuration before going live. Running ab to stress-test the machine, I was disappointed to see things topping out at 150 requests per second, with a significant number of requests taking around 1 second to return. Oddly, the machine itself wasn't even breathing hard. I finally thought to ping the box and saw ping times around 100-125 ms (the machine, to my surprise, is across the country). So it seems like network latency is dominating my testing. Running the same tests from a machine on the same network as the server (ping times < 1 ms), I see 5000 requests per second, which is more in line with what I expected from the machine. But this got me thinking: how do I determine and report a "realistic" measure of requests per second for a web server? You always see claims about performance, but shouldn't network latency be taken into consideration? Sure, I can serve 5000 requests per second to a machine next to the server, but not to a machine across the country. If I have a lot of slow connections, they will eventually impact my server's performance, right? Or am I thinking about this all wrong? Forgive me if this is network engineering 101 stuff; I'm a developer by trade. Update: edited for clarity.
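
    One way to make the latency effect visible in the numbers (a sketch; the URL is hypothetical): ab reports both throughput and the full latency distribution, and raising concurrency shows how many in-flight requests it takes to keep a 100 ms round trip busy.

        # 1000 requests, one at a time: throughput is capped near 1/RTT, ~8-10 req/s per connection
        ab -n 1000 -c 1 http://www.example.com/
        # Same total with 50 concurrent connections: throughput climbs until CPU or bandwidth binds
        ab -n 1000 -c 50 http://www.example.com/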

    Read the article

  • Using a Linksys Wireless-G Broadband Router for a Wireless Antenna/Adapter?

    - by Alex
    (Warning: I'm not too computer-smart.) Just moved from a home where I used ATT DSL to a home with Comcast cable for internet. The old home had the DSL ethernet wired to my only desktop, with a 2Wire router providing the wireless signal for the laptops. In the new home the signal comes from the cable via a Netgear (300) wireless router (remotely located) and works fine for my laptops. After searching with the network software, my desktop (which can't be ethernet-wired because of its location) detects no wireless signal. The desktop is a 2-year-old HP p6311f Pavilion (Windows 7). I can't seem to detect any wireless hardware (antenna/adapter). Maybe I don't have the ability and need to buy a USB wireless antenna? Would the Pavilion come with wireless capability out of the box, maybe something inside the tower? There's no antenna on the back. I happen to have a Linksys wireless router, which I plugged in to the desktop (trying both the internet and shared ports) and noticed signal action on the router's front panel, but no internet on the desktop. Can I use this as an antenna for the desktop? Thanks, and sorry if my solution is an easy one I'm just missing. I just want internet on the desktop.

    Read the article

  • Total newb having SSH and remote MySQL access problems

    - by kscott
    I don't often work with Linux or need to SSH into remote MySQL databases, so pardon my ignorance. For months I had been using the HeidiSQL client to remotely access a MySQL database. Today two things happened: the DB moved to a new server, and I updated HeidiSQL. Now I cannot log in to the MySQL server; when attempting, I get this message from Heidi:

        SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061)

    If I use Putty, I can connect to the server and get MySQL access through the command line, including fetching data from the DB. I assume this means my credentials and address are correct, but I do not understand why putting those same details into HeidiSQL's SSH tunnel info won't work. I also downloaded MySQL Workbench and attempted to set up a connection through that client, and got this message:

        Cannot Connect to Database Server
        Your connection attempt failed for user 'myusername' from your host to server at localhost:3306:
        Lost connection to MySQL server at 'reading initial communication packet', system error: 0
        Please:
        1 Check that mysql is running on server localhost
        2 Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed)
        3 Check the myusername has rights to connect to localhost from your address (mysql rights
          define what clients can connect to the server and from which machines)
        4 Make sure you are both providing a password if needed and using the correct password for
          localhost connecting from the host address you're connecting from

    From Googling around I see that it could be related to the MySQL bind-address, but I am a third-party sub-contractor with no access to the MySQL settings of this box, and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible, but I don't know what else to try.

    Edit 1 - The client settings I am using. In Heidi and MySQL Workbench I am using the following:

        SSH host + port:  theHostnameOfTheRemoteServer.com:22   {the same host I can Putty to}
        SSH Username:     mySSHusername    {the same user name I use for my Putty connection}
        SSH Password:     mySSHpassword    {the same password for the Putty connection}
        Local port:       3307
        MySQL host:       theHostnameOfTheRemoteServer.com
        MySQL User:       mySQLusername    {which I can connect with once in with Putty}
        MySQL Password:   mySQLpassword    {which works once in with Putty}
        Port:             3306
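
    A way to reproduce the tunnel outside of either GUI (a sketch, assuming an OpenSSH client; names as above): forward a local port through SSH, then point a MySQL client at the local end. Note that through a tunnel the MySQL host is interpreted on the remote side, so 127.0.0.1 there usually stands in for 'theHostnameOfTheRemoteServer.com'.

        ssh -p 22 -L 3307:127.0.0.1:3306 mySSHusername@theHostnameOfTheRemoteServer.com
        # In another shell, connect through the tunnel's local end:
        mysql -h 127.0.0.1 -P 3307 -u mySQLusername -p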

    Read the article

  • Run command remotely on Windows computer

    - by Bilal Aslam
    I have a Windows Server 2008 instance on Amazon EC2 (Amazon's cloud compute platform, which provides VMs in the cloud). It has an external IP, and I have an admin account on the box. I would like to 'bootstrap' this instance remotely, i.e. I want to run commands to download, install and configure apps on it, all without having to log on even once. Also, I cannot use psexec on the source computer. I have figured out how to do this for a remote, domain-joined computer using WMI. However, I have NOT been able to do it for a remote computer on EC2. Here are some specific restrictions:

    • The remote computer is not part of my domain, hence no Kerberos.
    • The remote computer does not have a cert I trust, or vice versa.

    I am sure I am running into some auth/trust restriction. Is there any way I can run a single command on the remote machine, given that I have admin privileges? I'm not tied down to using WMI, but I do need to run a command somehow. Feels like this should be a solved problem.
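
    One avenue that fits these restrictions (a sketch, untested against EC2; hostname and credentials are hypothetical): WinRM ships with Server 2008, and its winrs client authenticates with explicit credentials against a non-domain machine once the target service allows it. Note the listener port differs by WinRM version (80 for WinRM 1.x on Server 2008 RTM, 5985 for WinRM 2.0).

        REM On the EC2 instance (e.g. baked into the image or a first-boot script):
        winrm quickconfig -q
        winrm set winrm/config/service/auth @{Basic="true"}
        REM From the source computer:
        winrs -r:http://ec2-host.example.com -u:Administrator -p:Passw0rd cmd /c ver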

    Read the article

  • VPN Trunk Between Cisco ASA 5520 and DrayTek Vigor 2930

    - by David Heggie
    I'm a bit of a VPN newbie, so please go easy on me... I'm trying to use the VPN trunking capabilities of the DrayTek Vigor 2930 firewall to bond two IPSec VPN connections to a Cisco ASA 5520 device, and I'm getting myself tied in knots; I hope someone here with more knowledge / experience can help. I have a remote site with two ADSL connections and the DrayTek box. The main office site has the Cisco ASA device. I am able to set up a single IPSec connection between the two sites on either of the ADSL connections' public IP addresses, but as soon as I try to use the VPN bonding, nothing works. The VPN tunnels are both still up, but the traffic is getting lost somewhere. I suspect it's due to the ASA not knowing how to route the traffic back over the VPN - one minute, traffic from my remote office's network is coming from public IP address #1, the next it's coming from public address #2, and it doesn't know what to do. Well, that's my newbie impression of what's going wrong, but I don't really know:

    • if this is really what's happening, or
    • if what I'm trying to do (bond two VPN connections from a single remote network to improve the bandwidth / resiliency) is possible with the kit I've got.

    Could anyone help?

    Read the article

  • Proper 16:9 video size for non-HD 4:3 video (for youtube/vimeo)

    - by Xeoncross
    Since High Definition video came out on all the online sites, it has changed the default aspect ratio of the players from 4:3 to 16:9. This means that people posting SD video have to resize some of their videos to get them to fit right. For example, NTSC DVD quality (aka 480i/p) is 720x480 pixels (width x height), while low-end High Definition (720p) is 1280x720. Anyway, now that the video players are built for HD, you will find that uploading standard-quality videos results in videos that are "letterboxed", which means they have extra black bars on the top and bottom (or sides). Correct me if I'm wrong, but in order to get a 720x480 video to fit a box that is designed for HD, the best practice would be to crop some of it off so that it fits as 720x404, since:

        16 / 9     = 1.78 (1.7777777777778)
        720 / 405  = 1.78
        405 x 1.78 = 720.9

    The same would stand for 640x480 (old TV quality) video, which would need to be 640x360, correct? I'm asking because I'm not sure about all this, and whether this is the proper way to fix these letterboxing/display problems.
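
    A worked version of that arithmetic (a sketch in Python; the floor-to-even step reflects the common 4:2:0 codec requirement that both dimensions be even, which is presumably why 404 appears in place of the exact 405):

        def fit_16_9(width):
            exact = width * 9 / 16.0        # exact 16:9 height for this width
            even = int(exact) // 2 * 2      # floor to an even line count for 4:2:0 codecs
            return exact, even

        print fit_16_9(720)    # (405.0, 404)
        print fit_16_9(640)    # (360.0, 360)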

    Read the article

  • Backup strategy for developer-focused Apple environments?

    - by ewwhite
    It's interesting to see the technological split between structured corporate environments and more developer-driven/startup environments. Some of the Microsoft technologies I take for granted (VSS, Folder Redirection, etc.) simply are not available when managing the increasing number of Apple laptops I see in DevOps shops. I'm interested in centralized and automated backup strategies for a group of 30-40 Apple laptops. How is this typically done safely and securely, assuming these are company-owned machines (versus BYOD)? While Apple has Time Machine, it's geared toward individual computer backups and doesn't seem to work reliably in a group setting. Another issue with these workstations is the presence of Vagrant/VirtualBox VMs on the developers' systems; Time Machine and virtual machines typically don't work well together unless the VMs are excluded from the backup set. I'd like a push-based backup process with some flexible scheduling options. I know how to handle the backend storage, but I'm not sure what needs to be presented to the client systems. Due to the nature of the data here, cloud-based backup may not be a viable option. Any suggestions about how you handle this in your environment would be appreciated. Edit: the virtual machine backups are no longer important; they can be excluded from the process and planning.
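
    A minimal sketch of the push model (assuming rsync over SSH to your backend storage; the script path, host and excludes are hypothetical): each laptop pushes its home directories on a schedule, skipping the VM images that churn constantly, and a per-machine launchd job with StartCalendarInterval supplies the flexible scheduling.

        #!/bin/sh
        # /usr/local/bin/push-backup.sh - push user homes to central storage
        rsync -az --delete \
              --exclude 'VirtualBox VMs/' \
              --exclude '.vagrant.d/boxes/' \
              /Users/ backupsvc@backup.example.com:/backups/$(hostname -s)/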

    Read the article

  • Postfix Relay to Office365

    - by woodsbw
    I am trying to set up a Postfix server on a Linux box to relay all mail to our Office365 (Exchange, hosted by Microsoft) mail server, but I keep getting an error regarding the sending address:

        BB338140DC1: to= relay=pod51010.outlook.com[157.56.234.118]:587, delay=7.6,
            delays=0.01/0/2.5/5.1, dsn=5.7.1, status=bounced (host
            pod51010.outlook.com[157.56.234.118] said: 550 5.7.1 Client does not have
            permissions to send as this sender (in reply to end of DATA command))

    Office 365 requires that the sending address in the MAIL FROM and the From: header be the same as the address used to authenticate. I have tried everything I can think of in the config to get this working. My postconf -n:

        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        debug_peer_list = 127.0.0.1
        inet_interfaces = loopback-only
        inet_protocols = all
        mailbox_size_limit = 0
        mydestination = xxxxx, localhost.localdomain, localhost
        myhostname = localhost
        mynetworks = 127.0.0.0/8
        recipient_delimiter = +
        relay_domains = our.doamin
        relayhost = [pod51010.outlook.com]:587
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_always_send_ehlo = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_mechanism_filter = login
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtp_tls_loglevel = 1
        smtp_tls_security_level = may
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    sender_canonical:

        www-data          [email protected]
        root              [email protected]
        www-data@localhost    [email protected]
        root@localhost        [email protected]

    Also, sasl_passwd is set to the correct credentials (tested them using swaks multiple times). Authentication works, and the message is sent when the From: headers are correct (also tested using swaks, which works). The emails are coming from PHP, so I have also tried altering the sendmail path in php.ini to pass the correct from address via -f. So, for some reason, mail coming from www-data and root is not having the from fields rewritten to Office 365's satisfaction, and it won't send the message. Any Postfix gurus out there who can help me set up this relay?
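
    One thing the config above doesn't cover (a hedged sketch, not a confirmed fix; the address below stands in for the authenticating account): sender_canonical with sender_canonical_classes = envelope_sender rewrites only the MAIL FROM, while Office 365 also inspects the From: header. Postfix can rewrite that header on the outgoing side with smtp_header_checks (requires the pcre map support):

        # main.cf
        smtp_header_checks = pcre:/etc/postfix/smtp_header_checks

        # /etc/postfix/smtp_header_checks
        /^From:.*/    REPLACE From: account@ourdomain.example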

    Read the article

  • Search inside of text files

    - by Matt
    So here is the situation: I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail as .tmp files. Essentially these are just text files that are kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned. Then when I search the files one by one (painful at best), I find the files I need. Do I not understand the search feature well enough, or do I have the indexing set up wrong? I really don't care what I need to use to search the files; even a third-party app is fine with me. I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else; as long as I can find a way to get the job done I will be happy. Paid solutions are fine as well. Thanks everyone in advance.
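
    For what it's worth, a built-in fallback that skips indexing entirely (a sketch; the address and directory are hypothetical): findstr does a plain-text scan of whatever files you point it at, so it doesn't matter that Windows Search skips .log/.tmp contents.

        cd /d D:\MerakMail\Logs
        findstr /s /i /m "user@example.com" *.log *.tmp
        REM /s = recurse subdirectories, /i = case-insensitive, /m = print matching file names only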

    Read the article

  • SSH client and Command Prompt replacements Windows look-and-feel

    - by Oddthinking
    The Problem

    I've worked exclusively in Windows. I can handle that. I've worked exclusively in DOS (a long time ago!). I can handle that. I've worked exclusively in Unix. I can handle that. Right now, I am developing a command-line (Python) application on a Windows machine, testing it in a DOS box (i.e. Windows' Command Prompt), and then deploying it to Linux and running it with PuTTY. I cannot handle that. My productivity drops dramatically when CTRL-C cuts in one window (Windows) and kills the process in another (DOS, Linux). My productivity drops dramatically when Enter copies the selection in one window (DOS), deletes the selection in another (Windows), and runs the current half-edited command in the third (PuTTY). My productivity drops dramatically when I cannot hit Undo, Home or End.

    The Solution I am Seeking

    An SSH/Bash command-line client that runs on Windows and, to the extent possible, makes all the standard Windows shortcuts (Cut, Copy, Paste, Undo, Home, End, Insert, Shift-Arrows, etc.) work on a bash command line. Bonus points if it puts the cursor between letters, rather than on them. Plus, an equivalent DOS command-line drop-in that runs on Windows and provides the same interface. I appreciate there may need to be special buttons to actually transfer CTRL codes (like CTRL-C) through in the cases I need them. I suspect the SSH client will need to be specific to a shell (so it knows when it is at the command prompt and when it is inside a running app). I know there are many SSH clients, but I am looking for advice for a particular need. PuTTY feels like an escape route for Unix programmers stuck on Windows. I am the opposite. Can anyone recommend one (or maybe a combination of an SSH client and a Command-Line replacement)?

    Read the article

  • Prevent Roaming profiles from syncing certain elements

    - by user29919
    Hello everyone, I'm somewhat new to the Server 2008 front, and I'm afraid I've hit my first snag: I've set up roaming profiles, and they appear to be working too well. Is there a way to limit, ideally on a folder/object basis, what gets synced with a roaming profile? What I'm trying to do is:

    1) Stop my roaming profile from syncing desktop layout - I run a dual-screen desktop and a laptop, and it's really annoying to have to reposition everything after logging onto the laptop, because it forces everything onto one screen.

    2) Stop it from syncing registry variables - specifically, I want Visual Studio to load different settings files on each computer. Currently, the variable that contains that path is getting synced whenever I log in, so I get the settings from whatever box I last logged out from.

    3) Stop syncing the Start menu - this one's not as big, but I'm noticing 'program not found' icons even for programs that are installed. They work when I click them - they just look ugly.

    I'm running Windows SBS 2008 x64 with two Win7 clients (x86 Pro and x64 Ultimate). Is there a simple way to do that, or am I trying to work too much against what roaming profiles are designed for? I could, of course, set up different profiles for the desktop and laptop, but that seems to defeat the point of roaming profiles entirely... Thanks in advance! Any help will be much appreciated =)
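
    For the folder-level part of this, a sketch from memory worth verifying in gpedit (the policy name and value format below are the standard ones, but registry-backed settings such as window positions and the Visual Studio path can't be excluded this way, since NTUSER.DAT always roams):

        User Configuration > Administrative Templates > System > User Profiles
            "Exclude directories in roaming profile"
        Example value (semicolon-separated, relative to the profile root):
            Desktop;AppData\Roaming\Microsoft\Windows\Start Menu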

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m and the stack to 64k and 128k (and combinations of these) with no luck. My open files limit was originally 256, and I have bumped it to 1024. I have been googling around for a bit, and all the posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information on my Ubuntu box:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
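
    One difference that stands out between the two dumps (a hedged observation, not a confirmed diagnosis): the Mac caps max user processes at 266 while Ubuntu's is unlimited, and "unable to create new native thread" is precisely the error thrown when the OS refuses another thread. Whether -u actually gates native threads on your OS X release is worth testing directly:

        # Raise the cap for this shell session, then re-run the tests from it:
        ulimit -u 1024
        # Independently, a smaller per-thread stack makes each native thread cheaper:
        java -Xss256k ...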

    Read the article

  • compile kernel 2.6.34 for Ubuntu Lucid for xen dom0 / pvops

    - by andreash
    Hi there, I'd like to compile a recent Linux kernel (2.6.34) for my Ubuntu 10.04 Lucid Lynx AMD64 box, mainly because I'd like to use it as a dom0 kernel with the recent Xen 4. There's plenty of documentation on the web about how to compile a kernel 'Debian style'. But I think it would be nice to start with an 'official' Ubuntu config, to be sure not to miss anything important and to avoid having to recompile over and over again. So what I'd like to do is compile 2.6.34, but starting with the 'official' /boot/config-2.6.32-XX from Ubuntu Lucid. The question is: how do I best do that? If I just take the config from 2.6.32, the new features from 2.6.33/34 won't be in the config. So what I'd like to do is somehow merge the 2.6.34 defaults with the original 2.6.32 config from Ubuntu. How can I best do that? Does it even make sense? Are there easier ways to achieve what I want? Thanks for your insight! A.

    PS: I just found a linux-image-2.6.32-bpo.4-xen-amd64 package on backports.org, but no information about it. Would it work as a dom0 kernel on Lucid?
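
    The usual way to carry an older distro config forward (a sketch; the config filename is whatever actually sits in /boot on the Lucid box):

        cd linux-2.6.34
        cp /boot/config-2.6.32-XX .config
        make oldconfig      # keeps the 2.6.32 answers, prompts only for options new in 2.6.33/34
        make -j4 deb-pkg    # optional: builds installable .deb packages, Debian/Ubuntu style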

    Read the article
