Search Results

Search found 42982 results on 1720 pages for 'web clips'.


  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up two Squid servers to act as a reverse proxy and cache for a webserver on our intranet. The load balancing will be done with DNS round robin, or just different mappings for different clients. The thing is, I want both servers to try to contact each other to see if they have the requested object in cache before contacting the webserver for it (the network that serves the webserver is the bottleneck, and I'm trying to eliminate it). Both Squids are configured the same; here are the relevant config lines:

        acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com
        acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com
        acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com
        http_access allow dvr1_cache_it_best_tv_com
        http_access allow squid1_it_best_tv_com
        http_access allow squid2_it_best_tv_com
        http_access allow all
        http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com
        cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com
        cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com
        cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com
        cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com

    Just to make it clear: dvr1.cache is the alias for the proxy servers, and dvr1.origin is the webserver. Both servers work; both serve content, cache it, and work fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (dvr1.origin) instead of going to the sibling Squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the Squid manuals, and as far as I can see I did it all by the book, yet it doesn't work right. Any help will be appreciated!
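    One thing worth verifying (a sketch, not a confirmed fix; the ACL name and IP placeholder below are assumptions): sibling lookups in this setup ride on ICP, the second port (3130) in the cache_peer lines, so each Squid must both listen for and permit the other's queries, and should hand out only hits, not misses, to its sibling:

        # listen for ICP queries from the other sibling
        icp_port 3130
        acl sibling_squid src <ip-of-other-squid>
        icp_access allow sibling_squid
        # let the sibling fetch cache hits from us, but make it go to the parent on a miss
        miss_access deny sibling_squid

    If icp_port is unset (or 0) on either box, every sibling query goes unanswered and Squid quietly falls back to the parent, which matches the behaviour described.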


  • I've just set up FreeBSD 8.0 and can't login with ssh

    - by Matt
    /etc/hosts.allow is set to allow any protocol from anywhere. I can "ssh localhost" and it works; I simply get "connection refused" from PuTTY on another machine. Any ideas? I will try to get a copy of the sshd_config file as soon as I can find a flash disk to copy it to, but I thought someone might know what you need to set initially to permit login.

    EDIT: I think I can see why it's not working now. If I telnet to the IP address of the server, I'm greeted with:

        MGE UPS SYSTEMS SNMP Web/Agent configuration menu. Enter Password:

    Doh. OK, so the IP address is assigned by DHCP, but it seems there was already a device statically assigned to that address. I'll put in a reservation and try again.

    EDIT 2: Sorted now; it was an IP address conflict. Windows DHCP isn't smart enough to check whether something is already listening on an address before assigning it.


  • Constant CMS Session Expiry On 1&1 Cloud Server?

    - by leen3o
    I have a couple of 1&1's 'Dynamic Cloud Servers' and running Win2008R2 and they are setup as web servers, I have a number of Umbraco CMS installs on them and they have been running fine for over a year. On Saturday on BOTH servers, a very strange thing happened - As soon as I login to the CMS/Umbraco admin I am logged out with about 5 seconds? It's as if my session expires the moment I login? I have checked everything I can as I'm not really a server admin, and everything seems to be exactly as it was last week? Like I say this has happened EXACTLY the same time (Saturday) on TWO different servers? I'm just looking for ideas of what I should be looking for? Also the front end of the sites seem fine... Its only the backend when I login. I have gone to 1&1 about this, and as usual they have washed their hands saying its nothing to do with them - When I am certain it is. How can this happen on two different servers, and affect the same sites in exactly the same way? Any help, tips, things to try would be greatly appreciated.


  • qmail does not deliver to remote mailboxes for my own domain

    - by lorenzo-s
    Sorry for the title; I don't know how to sum up this situation. I have a web server at mydomain.com running qmail for website-related mail delivery (newsletters, sign-up confirmations, etc.). qmail here is used only to send mail, because I have a fully working Google Apps Gmail account associated with mydomain.com for normal email receiving. qmail runs fine when sending email to remote addresses, for example to [email protected], but fails when sending to [email protected]. I think it's because the server believes it has to manage mailboxes for mydomain.com locally, instead of routing them to Gmail. Here is /var/log/qmail/current for two emails: the first was sent without problems to example.com; the second fails because it's for mydomain.com:

        2012-11-15 15:04:11.551933500 new msg 262580
        2012-11-15 15:04:11.551936500 info msg 262580: bytes 5604 from <[email protected]> qp 5185 uid 33
        2012-11-15 15:04:11.575910500 starting delivery 316: msg 262580 to remote [email protected]
        2012-11-15 15:04:11.575912500 status: local 0/10 remote 1/20
        2012-11-15 15:04:12.189828500 delivery 316: success: 74.125.136.27_accepted_message./Remote_host_said:_250_2.0.0_OK_1352991894_j49si13055539eep.9/
        2012-11-15 15:04:12.189830500 status: local 0/10 remote 0/20
        2012-11-15 15:04:12.189831500 end msg 262580

        2012-11-15 16:49:20.270332500 new msg 262580
        2012-11-15 16:49:20.270336500 info msg 262580: bytes 2192 from <[email protected]> qp 5479 uid 33
        2012-11-15 16:49:20.315125500 starting delivery 323: msg 262580 to local [email protected]
        2012-11-15 16:49:20.315128500 status: local 1/10 remote 0/20
        2012-11-15 16:49:20.320855500 delivery 323: failure: Sorry,_no_mailbox_here_by_that_name._(#5.1.1)/
        2012-11-15 16:49:20.320858500 status: local 0/10 remote 0/20
        2012-11-15 16:49:20.372911500 bounce msg 262580 qp 5484
        2012-11-15 16:49:20.372914500 end msg 262580

    As you can see, it says "Sorry, no mailbox here by that name", and I can't say it's wrong :) How do I solve this? How do I let Google Apps Gmail manage incoming email for mydomain.com, including messages sent by the mydomain.com qmail server itself?
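    A direction worth checking (a sketch, assuming a stock netqmail layout under daemontools; the /service path is an assumption): qmail treats any domain listed in /var/qmail/control/locals or /var/qmail/control/virtualdomains as locally hosted and never looks up its MX. Removing mydomain.com from those files should make qmail route such mail out via the domain's MX records, which already point at Google:

        # see whether qmail thinks the domain is local
        grep -n 'mydomain\.com' /var/qmail/control/locals /var/qmail/control/virtualdomains
        # after deleting the offending line(s), make qmail-send reread its control files
        svc -h /service/qmail-send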


  • mysql thread count

    - by Ryan M.
    We have a web application that uses Apache and MySQL. Generally (according to Munin) our MySQL thread count sits between 2 and 4 at all times. The other day, our server almost came to a halt: HTTP requests were slow or wouldn't go through at all, and SSH would work but would take 30+ seconds to register keystrokes. So we pulled up Munin, and the only thing out of normal boundaries was the MySQL thread count. CPU usage was under 1%, load was under 1.0, and there was plenty of available RAM. As mentioned, the thread count usually floats around 2 to 4; at the time of our slowdown it had spiked to 14. Poking around the Internet, I see that in most cases you'll start to see a higher thread count when you run into slow queries. If I understand it correctly, a request comes in that takes a while to process; in the meantime other requests are coming in, so new threads are created to work on them (yes?). But at the time of the slowdown, we had 0 slow queries. My question is: what else can cause MySQL to create additional threads? And could this sudden spike in threads cause the server to slow down? To fix the issue, we restarted Apache and everything went back to its beautiful, normal self. Considering that the server vitals (CPU, RAM, network, etc.) were all ideal and the thread count was the only thing out of place, this seems like the most logical thing to pursue as the possible cause. If it matters, we're on MySQL 5.1.40. The server is FreeBSD 7.2, and the server in question is inside a jail.
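    One detail that may reconcile "14 threads but 0 slow queries": MySQL only writes a statement to the slow query log after it finishes, and time spent waiting for table locks does not count toward long_query_time, so a pile-up of blocked threads can leave the slow log empty. A few standard checks to run the next time it happens (a sketch; these are stock status variables, nothing specific to this setup):

        -- threads connected vs. actually running
        SHOW GLOBAL STATUS LIKE 'Threads_%';
        -- what every connection is doing right now; watch the State column for "Locked"
        SHOW FULL PROCESSLIST;
        -- lock contention that never surfaces as a slow query until it resolves
        SHOW GLOBAL STATUS LIKE 'Table_locks_waited';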


  • squid bypass for a domain

    - by krisdigitx
    I am using Squid with adzap. Is it possible to have Squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent

        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
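    Since the question asks about caching as well as bypassing, one caveat on that fix (a sketch against the same squid.conf, reusing the allow_domains ACL): always_direct only skips the peer hierarchy; it does not stop Squid from storing responses, nor adzap from rewriting them. Something like this should cover both:

        acl allow_domains dstdomain www.cnn.com
        # never serve this domain from (or store it in) the cache
        cache deny allow_domains
        # do not pass this domain through the adzap rewriter
        url_rewrite_access deny allow_domains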


  • Quickly set up a Windows Server and automatically install and configure software

    - by Chris
    Yesterday I spent far too much time downloading and installing software on Windows Server 2008. I only had to set up a simple server with SQL Server 2008 Express, using Microsoft's Web Platform Installer, then configure it to enable remote connections. Everything had to be attended, wasting my time. On a Linux system this would be trivial to automate, but this is Windows. I do this very rarely, but in the future I would like it to take as little time as possible. I could make a disc image with everything installed and configured, but is there a better way? I know nothing of advanced deployment techniques on Windows. Ideally I would like to be able to remotely re-install the OS, or have an unattended install (which I know is possible). Any tips to make the software I need easier to install and configure, with minimal interaction necessary, would be helpful. I don't expect everything I asked for to be possible and easy to do. Basically, if any part of it can be done quicker, or at least without user input, that's what I'm looking for.
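    For the Web Platform Installer piece specifically, there is a command-line front end that can run unattended (a sketch; the product ID and switches are assumptions to verify against your WebPI version via its /List option):

        REM list installable product IDs, then install SQL Express without prompts
        WebpiCmd.exe /List /ListOption:Available
        WebpiCmd.exe /Install /Products:SQLExpress /AcceptEULA
        REM open the firewall for remote SQL connections
        netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433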


  • Very Slow DSL (ethernet) speed

    - by Abhijit
    I 'was' on openSUSE 12.2, where my DSL speed was normal. Yesterday I switched from openSUSE to Ubuntu 12.04 and the speed dropped into the 7-25 kbps range. Then I switched to Linux Mint, and then to Fedora; still slow. While on Ubuntu I disabled IPv6, but still no luck. Now I am on Fedora, this time with a DIFFERENT ISP, and I still get very slow speed. So my guess is this has nothing to do with the OS. What can be wrong? Is this a problem with the NIC? Does NIC speed degrade over time? Does a NIC wear out the way a keyboard or mouse does? Help please. All the OSes I used are 64-bit, and my laptop is a Compaq Presario A965TU, Intel Centrino Dual Core. The interesting thing is that I get normal speed when downloading torrents inside torrent clients; the slow speed applies to downloads from any web browser and to installing software from the terminal.
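    A few quick hardware-level checks before blaming the NIC (a sketch; eth0 is an assumed interface name, substitute the one "ip link" reports):

        # negotiated link speed and duplex; 10 Mb/s half-duplex here would point
        # at the NIC, the cable, or the modem port
        sudo ethtool eth0
        # error and drop counters for the interface
        ip -s link show eth0
        # fast torrents but stalling browsers can also be an MTU problem; this
        # sends the largest unfragmentable payload for a standard 1500-byte MTU
        ping -M do -s 1472 -c 3 8.8.8.8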


  • Windows Server 2008 - inexplicable system time jumps/glitches/inaccuracies

    - by Nathan Ridley
    I'm running a production web server on Windows Server 2008. On this server I have a database that logs certain user actions, but every now and again I inexplicably get database entries which, according to the record ID and the records immediately before and after, have the wrong time logged against them (7+ days too old). For example, record ID 1001 will be for Dec 7, 11:00pm, 1002 will be for Dec 7, 11:01pm, then 1003 will be for Nov 28, 1:38am, and then the next will be back on track again. The problem occurs in random records (or 2-3 records in a row) and crops up once every few days. This is absolutely baffling, because there is only one place in the application that assigns this date/time value, and it's simply the system UTC date. I have been synchronizing the system time to time-a.nist.gov (which I read in another article is a bit more reliable than the default time.windows.com), and it still occasionally drifts out of sync anyway (3-4 minutes). I'm speculating that occasionally the time server has a temporary glitch where the date changes to a drastically wrong value for a short space of time, then changes back. Either that, or the motherboard clock battery is dead, and the time momentarily changes because the motherboard loses the time and then synchronization puts it back again. Could either of my suspicions be right? Should I turn off time synchronization for a production server? Assigning dates to an event log that are up to two weeks before the actual date is a severe problem I can't have when the next version of my application is released. Any suggestions or advice would be appreciated.
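    Two standard checks that would separate local clock drift from a misbehaving time source (a sketch; both are stock w32tm invocations on Server 2008):

        REM current source, stratum, and time of last successful sync
        w32tm /query /status
        REM sample the offset against the NTP server live; a healthy setup
        REM stays within fractions of a second
        w32tm /stripchart /computer:time-a.nist.gov /samples:5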


  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers:

        FROM: [email protected]
        TO: [email protected]
        SENDER: [email protected]

    We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the SENDER header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc.). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like:

        571 incorrect IP - psmtp (in reply to RCPT TO command)

    I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved, senders and recipients, want to participate and receive these messages. One alternative is to always use our company email address in the FROM header and prepend the author's name/address to the subject, but this seems a little clumsy.
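    For comparison, the pattern most bulk senders settled on (a sketch of the headers; the display-name wording and addresses are illustrations, not from the original): keep your own domain in From so strict receivers can authorize it, and keep the author reachable via Reply-To:

        From: "Jane Author via ExampleApp" <[email protected]>
        Reply-To: [email protected]
        To: [email protected]

    This sidesteps receivers that ignore Sender entirely, since the From domain now matches the server actually sending the mail.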


  • Recover hard disk from RAW format

    - by user1632736
    I have been all over the web today with no results. My drive, the whole drive where Windows resided, was encrypted with TrueCrypt. I decided to partition it to install Windows 8 and forgot it was encrypted, so the drive got damaged and became inaccessible; when connected to a computer, it asks to be formatted. Somehow I mounted the drive through TrueCrypt on another computer and could see and retrieve all the files. Then I decided to decrypt the drive, thinking everything would be back to normal. After decryption, the drive is not NTFS; it is in RAW format. I am trying every possible way to recover it, and I am desperate enough to ask, lol. I tried:

        ddrescue (Linux): not mountable, no signature, ntfsfix no good
        testdisk (Linux and Windows): sees the partitions but can't do anything
        many recovery applications, etc., etc.

    I read in different places that doing a quick format to NTFS and then running data recovery might help, but I would definitely like a second opinion. Any suggestion would be really helpful.
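    Before any quick format, a safer sequence worth trying (a sketch; /dev/sdb is an assumed device name, and repairs should be attempted against an image, not the original disk):

        # take a raw copy first, so failed repair attempts are reversible
        sudo ddrescue /dev/sdb disk.img rescue.log
        # decrypt-in-place often leaves the primary NTFS boot sector damaged while
        # the backup copy at the end of the partition survives; testdisk can restore it
        sudo testdisk disk.img
        # in testdisk: Advanced -> select the partition -> Boot -> Backup BS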


  • What is proper relationship between /etc/hosts and DNS A records for a Linux server?

    - by MountainX
    I have an Ubuntu server. It is going to be a web server reachable at www.example.com. I have a DNS A record pointing www.example.com to the server's IP address. Let's say I pick "trinity" as the hostname for this server. I want to set up the DNS records correctly. I need reverse DNS to www.example.com, so a CNAME for www.example.com doesn't seem appropriate. Here's my question: is it considered best practice to set up two DNS records (which in my case would likely be two A records), one for www.example.com and one for trinity.example.com, both pointing to this server's IP address? (Or, even if it is not accepted as a best practice, is it a good idea?) If so, would the following be a proper /etc/hosts file?

        $ cat /etc/hosts
        127.0.1.1       trinity.local trinity
        99.100.101.102  trinity.example.com trinity www.example.com

    This server is a Linode, and Linode's docs seem to imply that the above approach is best (if I am reading them correctly). Here's the relevant section; I bolded the line that seems to apply here:

        Update /etc/hosts

        Next, edit your /etc/hosts file to resemble the following example, replacing
        "plato" with your chosen hostname, "example.com" with your system's domain name,
        and "12.34.56.78" with your system's IP address. As with the hostname, the
        domain name part of your FQDN does not necessarily need to have any relationship
        to websites or other services hosted on the server (although it may if you
        wish). As an example, you might host "www.something.com" on your server, but the
        system's FQDN might be "mars.somethingelse.com."

        File: /etc/hosts
        127.0.0.1       localhost.localdomain localhost
        12.34.56.78     plato.example.com plato

        **The value you assign as your system's FQDN should have an "A" record in DNS
        pointing to your Linode's IP address.** For more information on configuring DNS,
        please see our guide on configuring DNS with the Linode Manager.
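    On the zone side, the two-A-records approach would look something like this (a sketch in BIND zone-file notation, using the question's placeholder IP; the PTR record lives in the reverse zone, which the IP's owner, Linode here, delegates or manages for you):

        trinity.example.com.          86400  IN  A    99.100.101.102
        www.example.com.              86400  IN  A    99.100.101.102
        ; reverse DNS: one PTR per address, pointing at whichever name you want reported
        102.101.100.99.in-addr.arpa.  86400  IN  PTR  www.example.com.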


  • Trouble printing to local printer when connected to VPN with split-tunneling enabled

    - by Marve
    I'm a volunteer network admin for a multi-tenant non-profit office space. One of our new tenants uses a VPN to connect to remote resources using RRAS and Small Business Server 2008. They also have a local network printer for the workstations in our office. When connected to the VPN, they cannot print to the local printer. I informed their network admin that they need to enable split-tunneling to fix this. Their network admin enabled split-tunneling, but apparently printing still didn't work. He told me that I need to open port 1723 on our office firewall to allow it to work. I'm just a novice administrator and not familiar with RRAS, but this doesn't sound right to me and I haven't been able to find anything on the web to validate it. Additionally, my understanding of split-tunneling is that it is handled entirely by the VPN client and should work irrespective of firewall settings. Is my understanding of the situation incorrect? What steps should I take to resolve this problem?


  • Forward secrecy in Nginx (CentOS6)

    - by Anil
    I am trying to enable forward secrecy in CentOS with the nginx web server.

    What I have tried: I read some tutorials, which say we need the latest versions of nginx and OpenSSL to enable it, so I installed the latest OpenSSL from source:

        sudo wget http://www.openssl.org/source/openssl-1.0.1e.tar.gz
        sudo tar -xvzf openssl-1.0.1e.tar.gz
        cd openssl-1.0.1e
        sudo ./config --prefix=/usr/local
        sudo make
        sudo make install

    Now OpenSSL supports the elliptic-curve ciphers (ECDHE). I tested this with openssl s_server as well, and it worked. Next, I replaced nginx with the latest version:

        sudo wget http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.4.2-1.el6.ngx.x86_64.rpm
        sudo rpm -e nginx
        sudo rpm -ivh nginx-1.4.2-1.el6.ngx.x86_64.rpm

    and configured nginx as described in this link:

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+RC4:EDH+aRSA:EECDH:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;

    http://baudehlo.wordpress.com/2013/06/24/setting-up-perfect-forward-secrecy-for-nginx-or-stud/

    But nginx still does not support the ECDHE ciphers; it only negotiates DHE ciphers. Just enabling the ECDHE cipher in nginx doesn't work either. I am using the latest web browser (Chrome 29, which supports this cipher). Am I missing anything? Or is this an issue with CentOS or nginx? I read somewhere about ECC patent issues with CentOS; could that be the cause?
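    A likely explanation worth checking (hedged, but consistent with the symptoms): the nginx RPM from nginx.org is dynamically linked against the system OpenSSL, and RHEL/CentOS builds of OpenSSL historically shipped with elliptic-curve support removed over patent concerns, so the 1.0.1e compiled into /usr/local is never actually used by nginx. A quick way to confirm:

        # what nginx was built against
        nginx -V 2>&1 | grep -i openssl
        # which libssl it loads at runtime
        ldd $(which nginx) | grep -i ssl
        # whether the *system* OpenSSL knows any ECDHE suites at all
        /usr/bin/openssl ciphers ECDHE

    If that last command errors out or prints nothing, the fix is to build nginx itself against the newer OpenSSL (its configure script accepts --with-openssl=/path/to/openssl-1.0.1e and links it statically) or to install an EC-capable OpenSSL system-wide.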


  • Do eSATA HDD docking stations have a capacity limit?

    - by Michael Kjörling
    I'm looking at perhaps buying an eSATA docking station to be able to easily plug in and unplug hard disk drives, particularly but not necessarily only for backup purposes. Note: This is not a hardware shopping recommendation question. Please don't vote to close it as such. Looking at different models, I find for example this page detailing the Deltaco SI-7908SUS which specifically states "storage capacity: 1.5 TB" as well as "pictured hard disk not included, only for illustration". A customer review specifically mentions that it does not work with 3 TB drives, although does not go into any detail such as OS, drive model, etc. From a brief glance, the vendor's web site does not appear to say either way. Then there is the quite similar Deltaco SI-7908B3 which boasts on the box "all 2.5" and 3.5" HDD/SSD compatible". My question is: Why would what basically amounts to a SATA/eSATA adapter have any say in what storage capacity devices are supported? Does it? Assuming the OS supports the full capacity of the drive, why should introducing another (not even a different, really) connector change anything? Bonus question: Might it make a difference if the docking station exposes multiple interfaces (such as in the case of for example the SI-7908SUS exposing USB 2.0 and eSATA)? (I still think it shouldn't, but it'd be nice to have it confirmed.)
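    A plausible reason such limits exist at all (hedged; whether this particular Deltaco bridge chip is affected is not confirmed by the product page): many older USB/eSATA bridge chips address the disk with 32-bit LBA over 512-byte sectors, which caps what they can see at

        2^32 sectors x 512 bytes/sector = 2,199,023,255,552 bytes (about 2.2 TB)

    That would explain a dock working with 1.5 TB and 2 TB drives while failing on (or silently wrapping around on) 3 TB drives, and it also bears on the bonus question: a multi-interface dock can inherit the limit from its USB bridge path even where a pure eSATA pass-through would not impose one.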


  • Server doesn't produce SYN-ACK

    - by steve
    I have a small program that takes packets from NFQUEUE, changes the ip.dst to my server's address (and the TTL), recalculates the checksum, and returns the packet to NFQUEUE. The server and the client are both Linux; Apache runs on the server and listens on port 80. On the client I open a telnet connection to a fake IP on port 80. The packet is rewritten by my program and sent to the server, but the target server (the new dst IP) receives the SYN and doesn't generate a SYN-ACK. (The server also belongs to me, so I can see that it gets the SYN with a correct checksum, yet doesn't generate a SYN-ACK.) If I do the same thing but with the real server IP as the destination, the TCP handshake completes correctly (in this case I only change the TTL and checksum; the TTL change is just a test to verify that my checksum calculation is OK). I compared the SYNs but didn't find any difference. Any idea? P.S. I saw the topic "Server not sending a SYN/ACK packet in response to a SYN packet" and set all the flags the same, but it didn't help. Thank you.
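    A few checks on the receiving box that cover the usual silent-drop cases (a sketch; eth0 is an assumed interface name):

        # confirm the SYN is visible to the kernel on the interface it arrives on
        sudo tcpdump -ni eth0 'tcp port 80 and tcp[tcpflags] & tcp-syn != 0'
        # counters that tick when the stack discards packets without replying
        netstat -s | egrep -i 'listen|drop|filter'
        # reverse-path filtering silently drops packets whose source address fails
        # the routing check, a common casualty of rewritten/injected traffic
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter
        # and make sure netfilter isn't classifying the rewritten SYN as INVALID
        sudo iptables -L -v -n | grep -i -E 'invalid|drop'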


  • Apache Probes -- what are they after?

    - by Chris_K
    The past few weeks I've been seeing more and more of these probes each day. I'd like to figure out what vulnerability they're looking for, but haven't been able to turn anything up with a web search. Here's a sample of what I get in my morning Logwatch emails:

        A total of XX possible successful probes were detected (the following URLs
        contain strings that match one or more of a listing of strings that indicate
        a possible exploit):

        /MyBlog/?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        /?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 301
        /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        //index2.php?option=com_myblog&Itemid=1&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200

    This is coming from a current CentOS 5.4 / Apache 2 box with all updates. I've manually tried entering a few of these to see what they return, but they all appear to just serve the site's home page. This server is just hosting a few Joomla! sites, but this doesn't seem to be targeting Joomla (as far as I can tell). Anyone know what they're probing for? I just want to make sure that whatever it is, I've got it covered (or not installed). The escalation of these entries has me a bit concerned.
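    For what it's worth, these do target Joomla: com_myblog is a Joomla component, and the task=../../../proc/self/environ%00 pattern is a classic local-file-inclusion probe that tries to execute PHP smuggled into /proc/self/environ via the User-Agent header. Two quick checks (a sketch; paths assume a standard CentOS Apache layout):

        # is the targeted component installed anywhere?
        find /var/www -type d -iname '*myblog*'
        # what did those "200" responses actually return? byte counts matching the
        # home page suggest the probes only ever got the home page
        grep 'proc/self/environ' /var/log/httpd/access_log* | awk '{print $7, $9, $10}'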


  • Understanding how IE's SmartScreen works

    - by Kevin Donn
    Today I downloaded an update to our mail server on my dev machine using IE9 on Win7 Pro. I directed IE to save the file on our server's shared drive so I could install it later. When the download finished, IE showed a red banner at the bottom and said that, ".exe is not commonly downloaded and could harm your computer." There were three buttons, "Delete", "Actions", and "View downloads". I selected "Actions" just because I had never seen this before. It showed a "SmartScreen Filter" dialog basically giving three choices: "Don't run this program (recommended)", "Delete program", and "Run anyway". I just canceled the dialog because I didn't want to run it in the first place; I just wanted to download it so I could run it later on the server. So when I did try to run it, it would blow up immediately saying, "Setup was unable to create the directory - Error 5: Access is denied." I tried unblocking the file, "Run as Administrator" even though I already was Administrator, turning off UAC, etc. Cutting to the chase, I finally downloaded the file again, ran WinMerge on the two and it showed they were identical, except the new one ran fine. I went back to my dev machine, downloaded the file through Firefox and then ran it on the server, again fine. But when I tried again through IE, again SmartScreen showed its red banner and somehow clobbered the file even though it was stored on another machine, and WinMerge can't tell the difference between it and a good file. I've looked around on the web for how SmartScreen works, but they all give user-level descriptions of it. What I want to know is, what does it do to that file to make it unrunnable on another machine? Thanks
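    Regarding what SmartScreen actually attaches to the file (hedged; whether it fully explains the Error 5 in this case is an open question): downloads get an NTFS alternate data stream named Zone.Identifier, the "mark of the web", which travels with the file onto NTFS shares and is what "Unblock" removes. WinMerge diffs only the main data stream, so two "identical" files can still differ here. The marker can be inspected and stripped from the command line:

        REM show the zone marker IE wrote alongside the download (cmd)
        more < setup.exe:Zone.Identifier
        REM remove it (PowerShell 3 or later)
        powershell -Command "Unblock-File -Path .\setup.exe"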


  • Transferring an SQL Processor License to a virtual hosted environment

    - by Andrew Shepherd
    My company is currently hosting a service in-house, and we want to move to an externally hosted environment, where we would use a virtual server. I understand that this might be spread across multiple machines, but from my perspective as a customer that layer is abstracted away; I shouldn't know or care about the hardware the OS is hosted on. We have a licensed edition of SQL Server 2008, with one per-processor license. Would it be a violation of the licensing agreement to use this in a virtual environment? The reference guide here says:

        When licensed Per Processor
        With Workgroup, Web, and Standard editions, for each server to which you have
        assigned the required number of per processor licenses, you may run, at any one
        time, any number of instances of the server software in physical and virtual
        operating system environments on the licensed server. However, the total number
        of physical and virtual processors used by those operating system environments
        cannot exceed the number of software licenses assigned to that server.

    For Enterprise edition there is an added option:

        If all physical processors in a machine have been licensed, then you may run
        unlimited instances of SQL Server 2008 in one physical and an unlimited number
        of virtual operating environments on that same machine.

    I'm having trouble getting my head around this. Would I theoretically have to get a license for every processor in this virtual environment (which is effectively impossible, because I have no way of knowing how many processors there actually are)? Or can I just say that it's hosted on one "virtual" server, so that's OK?


  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a Linux-based web infrastructure consisting of 15 virtual machines and over 50 services. It is fully controlled by Chef, and most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get once per day and we definitely do not want to run apt-get update unconditionally on every chef-client wake. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature). The current process has a number of drawbacks we want to address. First, it is asynchronous: the deployment script does not check chef-client logs after the restart, so we don't even know whether the deployment succeeded, and it does not wait for the Chef clients to complete their run. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure that using chef-client for deployment is legitimate at all; perhaps we have just been doing it wrong from the start. Please share your thoughts/experience.
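    One common way to make the last step synchronous (a sketch; the search query, SSH user, and exit-status behaviour are assumptions to verify for your knife version): instead of restarting daemons, have the build system trigger one-shot converges over SSH and gate the pipeline on the result:

        # run a single converge on just the affected nodes; abort the deploy
        # if the run fails anywhere
        knife ssh 'role:web' 'sudo chef-client --once' -x deploy || exit 1

    Using chef-client itself for deployment is a normal pattern with pull-based Chef; the piece usually missing is exactly this feedback loop back to whatever triggered the run.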


  • Linux wireless disconnect every 20 minutes

    - by james
    My laptop runs CentOS 6.3 with kernel 2.6.32-279.el6.x86_64. My wireless adapter is an Intel Corporation Centrino Wireless-N 1000. The wireless connection always dies after about 20 minutes: the network applet shows the connection is still up with good signal strength, but I cannot load any web pages, not even the configuration page of the wireless router. The problem persists until I disable and reconnect the wireless. Other devices, like my cell phone, use the same wireless network without the problem, and just yesterday I was using the same laptop with Fedora 17 without this problem. I also searched the internet, and someone said running the NetworkManager and network services simultaneously may be a problem. But I cannot stop either of them: if I stop network and start NetworkManager, the network service starts automatically; if I stop NetworkManager and run network, it says "Device does not seem to be present, delaying initialization." when trying to bring up the wireless. What should I do to get rid of this problem? Thank you very much!
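    Two low-risk experiments for this adapter family (a sketch; wlan0 is an assumed interface name, and the module name assumes the stock el6 iwlagn driver):

        # see whether power saving is enabled on the wireless interface
        iwconfig wlan0 | grep -i power
        # disable it for this session and watch whether the 20-minute hangs stop
        sudo iwconfig wlan0 power off
        # 802.11n quirks are another frequent culprit with Centrino N adapters;
        # this reloads the driver with N-mode disabled until reboot
        sudo rmmod iwlagn && sudo modprobe iwlagn 11n_disable=1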


  • Getting 0xc00000e windows 7 "Boot device inaccessible" after random crashes

    - by Dynde
    I've been having some weird random crashes that I can't seem to pin down, and I'm unsure whether they're Windows- or hardware-related. The computer is brand new and very powerful. I've now run into a couple of these crashes. I don't know what causes them, as they happen during the night while I'm sleeping. When I wake up, all I see is a boot manager screen that says:

        Exception: 0xc00000e "Boot device inaccessible"

    A simple restart doesn't fix the problem; the machine seems to struggle to locate my primary HDD. A complete shutdown works, though, and it then flies straight into Windows again. The Event Viewer doesn't tell me much. The most recent incident just gives me this:

        The previous system shutdown at 08:55:44 on 11-12-2011 was unexpected.

    and a Kernel-Power event:

        The system has rebooted without cleanly shutting down first. This error could be
        caused if the system stopped responding, crashed, or lost power unexpectedly.

    I can see only two application event entries around that time, at 8:47 (about 8 minutes before the crash):

        The Windows Modules Installer service entered the running state.
        The WinHTTP Web Proxy Auto-Discovery Service service entered the running state.

    Can anyone tell me anything about this, or direct me to a forum or something that might know what's wrong? I can supply the extra details of the events if needed. The HDD is an SSD; could that have anything to do with it? I ran a few diagnostics, and memory and the HDD should be OK; at least the diagnostics report is clean. Is it a faulty drive?


  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services: Email is hosted on IMAP. Working files are in Dropbox. Source code is managed in git. However, the following are things I always miss when jumping on the laptop: Installed Applications (current versions) Installed libraries & utilities (/usr/local) Apache VirtualHosts & other configurations (/etc) Disk image files for VMs My current method is to connect the MacBook via Firewire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster & easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
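    For the nightly-sync half of this, a starting point (a sketch; the hostname and exclude paths are assumptions to tune): rsync over SSH from the desktop, skipping exactly the items that dominate the transfer time:

        #!/bin/sh
        # pull the desktop home directory onto the laptop, preserving macOS
        # metadata (-E on Apple's rsync), excluding VM images and mail caches
        rsync -aE --delete \
          --exclude 'Documents/Virtual Machines.localized/' \
          --exclude 'Library/Mail/' \
          --exclude 'Library/Caches/' \
          desktop.local:/Users/me/ /Users/me/

    The /usr/local and /etc halves are the harder problem; they are what configuration-management tooling is for, and scripting the installs through a package manager (Homebrew or MacPorts, for example) keeps both machines converging on the same versions rather than copying binaries around.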


  • AJP connector Apache-Tomcat with PHP and Java applications

    - by Safari
    I have a question about the proxy and AJP modules. On my machine I have an Apache web server and a Tomcat servlet container; my Java web application runs on Tomcat. Apache hosts some services, which I can call this way:

        http://myhost/service1
        http://myhost/service2
        http://myhost/service3

    I want to configure an AJP connector so that my Tomcat webapp can be called through Apache, i.e. so that http://myhost serves the Tomcat webapp. So I configured Apache this way, and I got what I wanted; I can use http://myhost to view the Tomcat webapp through Apache:

        <VirtualHost *:80>
            ProxyRequests off
            ProxyPreserveHost On
            ServerAlias myserveralias
            ErrorLog logs/error.log
            CustomLog logs/access.log common
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass /server-status !
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1
            <Proxy balancer://mycluster>
                BalancerMember ajp://myIp:8009 min=10 max=100 route=portale loadfactor=1
                ProxySet lbmethod=bytraffic
            </Proxy>
            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from localhost
            </Location>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            LogFormat "%{Referer}i -> %U" referer
            LogFormat "%{User-agent}i" agent
        </VirtualHost>

    But now I can't use the Apache services: if I request http://myhost/service1, I get an error because Apache tries to find service1 on the Tomcat webapp. Is there a way to fix this?
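    A likely fix (a sketch against the same VirtualHost; /service1 etc. are the paths named above): mod_proxy evaluates ProxyPass directives in order and "/" catches everything, so each locally served path needs an exclusion before the catch-all, exactly like the existing server-status and balancer-manager lines:

        ProxyPass /service1 !
        ProxyPass /service2 !
        ProxyPass /service3 !
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1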


  • Windows Server 2008 R2 running at a snail's pace

    - by Django Reinhardt
    Really weird problem here. Our main web server has started running at a snail's pace, for no reason we can discern. Even after restarting the machine, when there's little or no RAM usage and CPU usage fluctuates between 0 and 30%, simple tasks, like opening Internet Explorer or waiting for My Computer to open, take forever. There are no processes hogging system resources that we can see; the machine itself just exhibits extremely slow behaviour. I've never seen a machine do this. A lot of security updates had built up, so we decided to let Windows install them. When we looked through the history after restarting, though, they had failed with error code 800706BA. I don't know whether this could be related. Any help in this matter would be greatly appreciated. As mentioned in the title, we're running a Windows Server 2008 R2 machine; it's also running SQL Server and IIS, and has 16 GB of RAM and a decent quad-core processor. It's been fine until now, and we haven't changed a thing.

