Search Results

Search found 21263 results on 851 pages for 'website deployment'.


  • multiple streaming servers behind a Bastion Host

    - by Bond
    I am using the open source streaming server Red5 on multiple servers, all running behind a bastion host. The world knows these sites as http://site1.mydomain.com, http://site2.mydomain.com, http://site3.mydomain.com and http://site4.mydomain.com. To reach them, the front end server uses an Apache reverse proxy. Each of these websites also carries video streaming over RTMP; to reach the streaming server I embed JavaScript in the HTML pages along these lines:

        <embed ..... var="rtmp://site1.my_domain.com" >

    The problem is that there are many sites (site1.mydomain.com through site4.mydomain.com), each on a separate physical server with its own Red5 installation, and the common front end to all four is the bastion host. If I run RTMP on each of the subdomains on a different port, how do I make sure that requests such as rtmp://site1.mydomain.com and rtmp://site2.mydomain.com go to their respective servers from the front end? What do I need to handle in this case? IPTABLES came to mind instantly, but when someone on the internet requests rtmp://site1.mydomain.com from their browser, how do I make sure that RTMP request is mapped to a port other than 1935, given that three other streaming servers also have to respond to their respective requests?
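
    Since RTMP is not HTTP, Apache's reverse proxy will not carry it; one common approach, sketched below under the assumption that the bastion host has (or can be given) a distinct public IP per subdomain, is destination NAT on port 1935 with iptables. The addresses and interface names are placeholders, not values from the question.

        # Hedged sketch: forward RTMP (1935) arriving on each public IP to the
        # matching backend Red5 server. Addresses and interfaces are placeholders.
        iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.1 -p tcp --dport 1935 \
          -j DNAT --to-destination 10.0.0.11:1935
        iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.2 -p tcp --dport 1935 \
          -j DNAT --to-destination 10.0.0.12:1935
        # Allow the forwarded traffic and masquerade replies back out.
        iptables -A FORWARD -p tcp -d 10.0.0.0/24 --dport 1935 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

    If separate public IPs are not an option, the alternative is to expose each backend on a different public port (1935, 1936, ...) and put that port in the rtmp:// URL embedded in each site's pages.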

    Read the article

  • Kunagi LDAP configuration problems

    - by Willem de Vries
    We recently started with Scrum at our company and we wanted to start using Kunagi to test and see how it works. So I installed the kunagi_0.23.2.deb package that I downloaded from their website, on my Ubuntu 11.04, running in tomcat6 using openjdk-6-jre. Everything works fine except that I can't get the LDAP to work. I have one AD server and one LDAP server at my disposal for testing. For the LDAP server I use the following info:

        uri: ldap://192.168.1.11:389
        user: some_tested_user
        passwd: the_pass
        DN: dc=colosa,dc=net
        LDAP Filter: (&(objectClass=user))

    I tested various LDAP filters; I don't know if I have the right one. However, I get an error when clicking "test LDAP". The error refers to the DN:

        Server service call error
        Calling service TestLdap failed.
        java.lang.RuntimeException: InvalidNameException: [LDAP: error code 34 - invalid DN]

    With the AD server I get no error while testing, yet I am not able to log in; I get "Login faild" every time. I don't know if this is because of the LDAP filter I entered, but I can't get it to work. I have read http://kunagi.org/iss652.html, which states that I need to create my accounts inside Kunagi before I can log in. So I did this, with no effect. So basically my question is: what causes this DN string error (I am sure mine is right), and what LDAP filter should I use? Any help would be highly appreciated.
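
    One way to check the base DN and filter outside Kunagi is a raw ldapsearch query; the sketch below is illustrative only, with the bind DN, password and account name as placeholders rather than values known from the question.

        # Hedged sketch: confirm the base DN and filter return the expected entry.
        # Bind DN, password and the sAMAccountName value are placeholders.
        ldapsearch -x -H ldap://192.168.1.11:389 \
          -D "cn=some_tested_user,dc=colosa,dc=net" -w the_pass \
          -b "dc=colosa,dc=net" "(&(objectClass=user)(sAMAccountName=jdoe))"
        # Against a non-AD LDAP server, objectClass=inetOrgPerson and uid= are
        # the more usual attribute names.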

    Read the article

  • What can I do to lower bandwidth cost on a bandwidth heavy site?

    - by acidzombie24
    The easiest answer is a CDN, but I'd like to ask. A friend of mine has a server that is used for mirror downloads. He says he is doing about 10TB of bandwidth a month, which shocked me (I wonder if he is lying). I've seen his site and he has no ads. I suspect he might close his website once he gets the bill. Anyway, I was wondering, since his CPU/RAM is not being used and his HD usage is around 15GB, what he can do to lower cost if he continues this site. I said put up ads, but I don't know if ads would cover it. I found one CDN which offers $0.070/GB: 10240GB (10TB) * $0.07 = $717 a month. That seems a little steep, but he is using lots of traffic due to it being a mirror site. Also, using a CDN doesn't make much sense, as he doesn't need multiple servers hosting the files in different areas (which is one reason he isn't using one now). He just needs a big upload pipe. Is there something he can do? At the moment he is paying $200 a month for a dedicated server and he is using WAY more bandwidth than he should be. Side question: can gzipping already-compressed files (zip, rar, etc.) help?

    Read the article

  • Change the Default Date setting in Word 2010

    - by Chris
    I am using Word 2010 and Windows 7. You know how, when you start typing a date in Word, it will automatically suggest what it thinks you want? Like if I start typing "6/29", a little grey bubble will display "6/29/13 (Press ENTER to Insert)". How do I get the bubble to display the year in a 4-digit format, such as "6/29/2013 (Press ENTER to Insert)"? I have already gone to the Date & Time option under the Insert menu, and the date format that I want is already selected. I think that only applies to Quick Parts anyway, so the date automatically updates when you open a document. The Region and Language settings under the Control Panel are correct as well. I thought at one point I found the setting somewhere under Options, but I am sure I looked through everything many times and I can't find it. I posted this exact question at the Microsoft website and someone replied: "Go to the Windows Control Panel and click on Clock, Language and Region, then on Change the date, time, or number format, and then modify the Short date format so that it is what you want to be used." So please don't suggest this again, because in my question I did say that I already tried this and it doesn't work, at least not for Word, in this situation. Thanks.

    Read the article

  • OpenSSL response 404 issue on CentOS 6

    - by dsp_099
    I followed this tutorial (though it's for 5.2, I figured I'd be alright). The changes I had to make that seemed to have worked:

        Rename ca.csr to ca.cslr (that's the one the command generated)
        List it in ssl.conf as ca.cslr instead of ca.csr

    I have the following in httpd.conf:

        <VirtualHost *:80>
            DocumentRoot /etc/test
            ServerName site.com
        </VirtualHost>
        <VirtualHost *:433>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            <Directory /etc/test>
                AllowOverride All
            </Directory>
            DocumentRoot /etc/test
            ServerName cryptokings.com
        </VirtualHost>

    /test contains a folder inside of it, accessible via http://site.com/test/foo; however, attempting to access it via https://site.com/test/foo results in a warning that the certificate is untrusted (self-signed, no biggie) and a 404 error. Chrome's complaints about the certificate are the following: "The identity of this website has not been verified. Server's certificate does not match the URL. Server's certificate is not trusted." I think those warnings are a side effect of a self-signed certificate - or is the first one something that needs to be addressed? I seem to be able to fetch the root page via https just fine, though; it shows a standard CentOS setup page. (That said, I haven't added a VirtualHost entry for it, so I suppose that makes sense.) I think I've made a mistake somewhere during the setup, as I'm not too familiar with the process. During setup, I was prompted for a type of password that would be required when Apache restarts, but running service httpd restart does not seem to prompt me for one. Any help would be appreciated.
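
    A quick way to see which virtual host and certificate Apache is actually presenting for this name on 443, independent of the browser, is sketched below; these are standard OpenSSL and curl invocations, with site.com taken from the question.

        # Hedged sketch: show the subject/issuer of the certificate served for site.com
        openssl s_client -connect site.com:443 -servername site.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer
        # Fetch the path over HTTPS, ignoring the self-signed warning, to see the 404 headers
        curl -k -v https://site.com/test/foo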

    Read the article

  • AjaxControlToolkit JavaScript is not pointing correctly on IIS7 running behind Apache mod_proxy

    - by sohum
    So here's my setup. I've got a DynDNS account since I have a dynamic IP. I have Apache listening on port 80 and IIS7 on port 8080. I don't want users to have to enter mydyndns.dyndns.com:8080 to get to IIS7, so I've added the following to my Apache httpd.conf file to enable a proxy/reverse proxy:

        <VirtualHost *:80>
            ProxyPass / http://localhost:8080/myASPSite/
            ProxyPassReverse / http://localhost:8080/myASPSite/
            ServerName myaspsite.mydomain.com
        </VirtualHost>

    I've got a CNAME record set up on my DNS so that myaspsite.mydomain.com redirects to mydyndns.dyndns.com. When I type myaspsite.mydomain.com into my browser, everything works beautifully... mostly. IIS7 serves up the ASPX pages and visitors to the site don't know any better. A problem arises, however, when I add Ajax Control Toolkit controls to my ASPX website, because these generate JavaScript, and apparently mod_proxy_html isn't geared to handle the JS URIs properly. Sure enough, when I open up the source of my ASPX page, it has script elements as follows:

        <script src="/myASPSite/WebResource.axd?xyz" type="text/javascript"></script>
        <script src="/myASPSite/ScriptResource.axd?xyz" type="text/javascript"></script>

    These scripts are attempting to be resolved at http://myaspsite.mydomain.com/myASPSite/WebResource..., which through the proxy translates to localhost:8080/myASPSite/myASPSite/.... How can I solve this problem? The couple of websites I found suggested turning on ProxyHTMLExtended, but when I tried doing that, the server did not start. I'm guessing I didn't know how to do it properly. Does anyone have a handy couple of config lines that I can add to my Apache conf file to get this working as I need? I'm using Apache 2.2.11. Thanks!
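
    One way to sidestep the doubled /myASPSite/ prefix, sketched here as an assumption rather than a confirmed fix, is to keep the application path identical on both sides of the proxy so the URLs ASP.NET emits resolve unchanged:

        <VirtualHost *:80>
            ServerName myaspsite.mydomain.com
            # Hedged sketch: proxy the application path as-is so the
            # WebResource.axd/ScriptResource.axd references line up.
            ProxyPass        /myASPSite/ http://localhost:8080/myASPSite/
            ProxyPassReverse /myASPSite/ http://localhost:8080/myASPSite/
            # Send the bare root to the application's path.
            RedirectMatch ^/$ /myASPSite/
        </VirtualHost>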

    Read the article

  • Attempts at NIC teaming on Server 2008 R2 with PRO/1000 MT

    - by Klaus
    I have a Dell PowerEdge 1850 server and a gigabit switch that supports nic teaming (and was configured to do so). The server has a total of four Intel PRO/1000 MT ports, which also support teaming. But.. for some reason Intel does not actually have a version of the drivers/ProSet that will work for these cards on 2008 R2. You have to use the built-in drivers that come with 2008 R2, which do not support the additional features. According to their website, they have no plans to change this. Strangely enough, I experimented with various drivers in an attempt to force it to work. At one point, the teaming was working, but there were side effects (such as the DNS server refusing to start). So now I am back to running just one of the cards, (very) frustrated about the whole situation. I have looked all over to see if there is some way around this, but have not had any success. I know I can probably just get a new network adapter for it, but with the good deal I got, that would cost more than the server! :) While staying with 2008 R2, does anyone know of any possible alternatives? Thanks!

    Read the article

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: the private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by giltgroupe.bounce.ed10.net
        signed-by giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I start the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    the dk-filter service signs with domain x.com (d=x.com). If I change the daemon arguments as follows:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM, and it works perfectly. However, DK doesn't seem to work. I don't have any problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on how to sign emails from multiple domains with a specific chosen domain?

    Read the article

  • HP LaserJet 2550 has a carousel motor error

    - by Arlen Beiler
    I have a LaserJet 2550, and it has worked pretty well for a long time (except for some slowness a while back, spooling I think), but just recently it suddenly quit working. We moved this summer but left it at our other place, and just recently, when my Dad went over there to try to print something out, it didn't work. When you turn it on, you hear the fan give a false start (basically a quick pulse), and the carousel goes through its usual routine. Then it starts up in earnest, like it's getting ready to print something. All of a sudden it just stops. Everything stops, and the three lower lights are steady. When I push the Go button, the Go light (bottom of the 3) turns off, but the other two stay on. I looked it up on the HP website and it says it is a carousel motor problem. I called HP, but they said it is out of warranty. I've opened the cover and held the switch with a screwdriver so I could watch it, and it goes through its routine as I described (it doesn't seem to make a difference whether the imaging drum is in or not); then, when it stops, the carousel kind of seems to jump back a little bit. I hope this all makes sense (I know you like details), and hopefully you also know what to do to fix it. Thanks.

    Read the article

  • apache2: ssl_error_rx_record_too_long when visiting port 80? help!

    - by John
    Hi, I have an Ubuntu 10 x64 server edition machine. I got a second IP and configured /etc/network/interfaces like so (actual IPs and gateways removed):

        auto lo
        iface lo inet loopback
        iface eth0 inet dhcp
        auto eth0
        auto eth0:0
        iface eth0 inet static
            address [ my first IP ]
            netmask 255.255.255.0
            gateway [ my first gateway ]
        iface eth0:0 inet static
            address [ my second IP ]
            netmask 255.255.255.0
            gateway [ my second gateway ]

    /etc/apache2/ports.conf:

        Listen 80
        NameVirtualHost [ my first IP ]:80
        NameVirtualHost [ my second IP ]:80
        # If you add NameVirtualHost *:443 here, you will also have to change
        # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
        # to
        # Server Name Indication for SSL named virtual hosts is currently not
        # supported by MSIE on Windows XP.
        Listen 443
        NameVirtualHost [ my first IP - some site is running SSL successfully using it ]:443
        Listen 443

    /etc/apache2/sites-enabled/mysite.conf:

        ServerName mysite.com
        Include /var/www/mysite.com/djangoproject/apache/django.conf

    Then when visiting http[mysite].com:80 or http[mysite].com (:// removed because Server Fault doesn't allow me to post hyperlinks), I get:

        An error occurred during a connection to [mysite].com.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    My guess is that the configuration file is not being picked up, and Apache is therefore looking for the default-ssl file, which is not in conf-enabled. If I were to configure that file properly, it seems I would successfully connect to whatever default directory is specified in the default-ssl file. But I want to connect to my website. Any ideas? Thanks in advance!
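
    The ssl_error_rx_record_too_long message generally means an https request reached a port or vhost where Apache is not actually speaking TLS. Purely as a point of comparison, a minimal named SSL vhost for this layout might look like the sketch below; the certificate paths are placeholders, not paths known from the question.

        <VirtualHost [ my first IP ]:443>
            ServerName mysite.com
            SSLEngine on
            # Placeholder paths; substitute the real certificate and key.
            SSLCertificateFile    /etc/ssl/certs/mysite.com.crt
            SSLCertificateKeyFile /etc/ssl/private/mysite.com.key
            Include /var/www/mysite.com/djangoproject/apache/django.conf
        </VirtualHost>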

    Read the article

  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc. But when I put it behind Nginx, git commands stop working, giving no errors except: "fatal: The remote end hung up unexpectedly". I know that it's Nginx that's causing the trouble, because if I open the port that Tornado is running on and try my git commands through that (i.e. "git pull \http://mysite.com:8000/myrepository master" instead of "git pull \http://mysite.com/myrepository master" [backslashes added because Server Fault says I have too many links]), everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure that it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress, I'm just not sure which. Any thoughts on how to fix this? Since it works when not going through Nginx, I'm considering just redirecting git requests to the Tornado port itself, but this feels like a hack rather than a clean solution...
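
    A sketch of the kind of location block that avoids the usual proxy pitfalls for git smart HTTP (response buffering, gzip, body-size caps) is below; the upstream port comes from the question, everything else is an assumption about a typical Gittornado-behind-Nginx setup rather than the poster's actual config.

        # Hedged sketch: pass git smart-HTTP requests to Tornado untouched.
        location ~ (/info/refs|/git-upload-pack|/git-receive-pack)$ {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_buffering off;        # don't buffer pack streams
            gzip off;                   # don't recompress git's own compression
            client_max_body_size 0;     # don't cap push sizes
        }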

    Read the article

  • Amazon EC2 Reserved Instances: "Heavy Utilization" clarification

    - by gravyface
    Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their Website: Heavy Utilization RIs – Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They’re most appropriate for steady-state workloads where you’re willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you’re trying to find a break-even utilization, you’re economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term. I'm assuming that, if I'm planning on running a 24/7 Web Server, then regardless of how many resources I consume (bandwidth, cpu cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one Web Server in particular will likely barely budge the cpu, but it needs to be up and running 24/7. Not 100% on what they're defining as "heavy".

    Read the article

  • redirect from mysite.com to www.mysite.com

    - by jml
    Hi there, I know that this has been answered many, many times, so if someone wants to point me to another thread that answers my question specifically, that is fine... for right now, my searches aren't yielding many results. So I have a website like mysite.com that has a Flash SWF embedded in it, and when I go to www.mysite.com, all of a sudden things don't work properly. I would like to get to the bottom of this, because it's not like the page just "doesn't load" at all; it loads, and I can only do certain things, as if certain functionality is disabled (it might be URL requests for specific URLs, etc.). Do I need to manage this in my control panel? I wouldn't assume so, because the site loads; it just has crippled functionality from within the SWF. I was thinking it might have more to do with my crossdomain.xml file; could this be the case? Thanks for any tips or suggestions.
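
    For the redirect in the title, the usual mod_rewrite rule that forces every request onto the www host (so the SWF always runs from a single origin) is sketched below; it assumes an Apache setup with mod_rewrite available, which the question does not actually state.

        # Hedged sketch (.htaccess or vhost): canonicalise mysite.com onto www.mysite.com
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]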

    Read the article

  • Solaris Fibre Channel target - Configure QLogic QLA2340

    - by growse
    I'm currently trying to set up a small storage system as a fibre channel target. This is for testing, so I'm currently using Solaris (Nexenta) and a QLogic QLA2340 HBA. For some reason, the qlc and qlt drivers don't support the QLA2340, so I'm using the qla2300 driver from QLogic's website. I've also got the scli utility installed for configuration. The HBA is detected by the system. That said, it's not clear how I get from this point to a point where I have a ZFS volume being exposed as an FC target. I was originally following this guide (http://www.youtube.com/watch?v=yzEBd3l7Qn4) but it seems that without the qlc/qlt drivers, Sun's configuration tools won't work. Does that also imply that COMSTAR also won't work? What's the best way to expose an FC target with this setup? Most of the options I'm seeing in scli complain that the port state is LinkDown (it is, I've not plugged anything in yet). Do I have to have my FC client plugged up and working before I can configure the target? Apologies for the slight vagueness of the question, but I'm not overly familiar with the terminology.

    Read the article

  • Problem configuring Apache/Wordpress on subdomain

    - by friism
    I have two servers (one LAMP, one Windows) and one website with an associated blog. I'm running the main site on the Windows server, and the blog on the LAMP server, using Wordpress. The main site is accessed at http://folketsting.dk (it's in Danish -- sorry), the blog is accessed at http://blog.folketsting.dk (this link is bad, read on). The main site works fine. The blog works, except for the frontpage. Example of working post: http://blog.folketsting.dk/2009/10/09/ftlive/. The frontpage of the blog (http://blog.folketsting.dk) shows html from http://folketsting.dk however (except for the css and javascript). In fact, any other URL than the frontpage "works", and gets served by Wordpress e.g. http://blog.folketsting.dk/foo. I cannot -- for the life of me -- understand how the LAMP server running http://blog.folketsting.dk manages to serve up content generated by the Windows server running http://folketsting.dk. Looking at the response headers at http://blog.folketsting.dk, it's evident that the content originates from Apache, not IIS. I'm pretty sure it's not a DNS-issue, since the problem is evident even when accessing the raw IP, eg. http://130.226.142.141/ vs. http://130.226.142.141/foo. I'm thinking it's a bad config in Apache... any clues?
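
    A way to pin down which vhost the LAMP box itself serves for the blog's front page, independent of DNS and of any browser caching, is to ask it directly with an explicit Host header; the sketch below uses only the IP and hostnames given in the question.

        # Hedged sketch: compare headers for the front page and a deeper path.
        curl -sI -H "Host: blog.folketsting.dk" http://130.226.142.141/
        curl -sI -H "Host: blog.folketsting.dk" http://130.226.142.141/foo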

    Read the article

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows 2k8 R2 with the Web and FTP Server roles set up, which won't serve any content at all. Trying to connect from another host via telnet yields "connect failed":

        c:\>telnet devserver 80
        Connecting To devserver...Could not open connection to the host, on port 80: Connect failed

    Using netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.); tcpview confirms this, listing no open connections on port 80. The following services related to the Web role are running:

        World Wide Web Publishing Service
        Application Host Helper Service
        Microsoft FTP Service (ftp connections to port 21 are granted)
        Windows Process Activation Service

    The default website bindings are:

        Type             Host Name   Port   IP Address   Binding Information
        http                         80     *
        net.tcp                                          808:*
        net.pipe                                         *
        net.msmq                                         localhost
        msmq.formatname                                  localhost

    When setting up a new application under the default site, the test function passes both connection and authorisation only if the "connect as" user is a local admin; otherwise the test errors with "invalid application path". At no point is the W3SVC service PID bound to port 80 (it is running and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
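
    If the binding looks right but nothing is listening, a few stock commands can show whether HTTP.SYS has registered the site and whether anything else owns port 80; these are standard Windows/IIS tools, suggested here on the assumption they are available on the box.

        :: Hedged sketch: what HTTP.SYS has registered, what (if anything) owns
        :: port 80, and what IIS thinks its sites and bindings are.
        netsh http show servicestate
        netstat -ano | findstr LISTENING | findstr :80
        %windir%\system32\inetsrv\appcmd list sites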

    Read the article

  • How do I upgrade the BIOS to boot the motherboard when the CPU is not supported?

    - by Matt
    So I have a Tyan S8225 motherboard with a Valencia (Opteron 4200) CPU. Two of us have tried everything. We have swapped the whole motherboard, the power supply, the memory, even a different CPU (still a Valencia). We even tried it without any memory installed and with only a single CPU. There are no beep codes, as I suspect even those are controlled by the BIOS boot process. We put the whole thing on the bench with just the power supply, VGA, keyboard and network (for IPMI) connected. The IPMI is not even showing the BIOS starting, but the IPMI itself is working. After some hunting around, I discovered the claim on the website that the Valencia CPUs are not supported on older BIOS revisions. For a start, I don't know what the BIOS revision is or whether it's older, but it's the only thing left. Could the BIOS be causing a board not to boot at all? If that's the case, is there any other way to update the BIOS without buying an old CPU only to put it back in a box just to update the BIOS? Yes, we even tried updating the BIOS through IPMI, but you can't do that either.

    Read the article

  • OS X Client & Ubuntu Server - Best way for client to access files on server?

    - by Camsoft
    I've got a local development web server running Ubuntu. I also have an iMac running OS X 10.6 which I use as a client and is my development machine. I currently have a Samba server installed on my Ubuntu server, with shares set up for all the website directories. I then use my Mac and Coda to edit the files via those shares. This generally works really well, but I noticed that my Mac was writing loads of resource fork ._filename files everywhere. I found out the following about these files: "These files are created on volumes that don't natively support full HFS file characteristics (e.g. ufs volumes, Windows fileshares, etc). When a Mac file is copied to such a volume, its data fork is stored under the file's regular name, and the additional HFS information (resource fork, type & creator codes, etc) is stored in a second file (in AppleDouble format), with a name that starts with '._'. (These files are, of course, invisible as far as OS X is concerned, but not to other OSes; this can sometimes be annoying...)" Does anyone know of a way of sharing files between a Mac client and a Linux server that is most compatible between the two operating systems? Ideally it needs to support the HFS filesystem so that the resource forks are not created, and it also needs to support the permissions between server and client.
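
    One commonly suggested mitigation on the Samba side, offered here as a sketch rather than a tested fix for this setup, is to veto the AppleDouble and Finder metadata files on the share so they are neither visible nor left behind:

        # Hedged sketch (smb.conf, per share): hide and clean up ._* and .DS_Store
        # files written by OS X clients. Share name and path are placeholders.
        [websites]
            path = /var/www
            veto files = /._*/.DS_Store/
            delete veto files = yes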

    Read the article

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory, and these should get pushed out to the WC (working copy), as opposed to the normal direction where changes flow from the WC to the REPO.

    Scenario: my svn repo is /var/www/svn/drupal and my checkout dir/working copy is /var/www/html/drupalsite. So I've done the following:

        1. Edited the post-commit hook to contain: "/usr/bin/svn update /var/www/html/drupalsite"
        2. I won't make any change to the svn WC. I'll make changes to the svn REPO, /var/www/svn/drupal.
        3. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook. This in turn will run "svn update /var/www/svn/drupal" and thus my WC will get updated with the changes in the REPO.

    Query:
        a. Would the above steps 1-3 achieve my requirement?
        b. I'd need advice on how to test whether the above setup works successfully or not. I'm at a loss about whether steps 1-3 will succeed, which is the reason query (a) is present. This is a bit more of a concern for me.

    NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development. It seems to be a PHP Drupal website and I happen to be setting it up, so I'm not aware of how to make a "PROPER" change in the REPO so that it gets reflected in the WC. If it is reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir hoping to see a change in the WC and ran steps 1-3, but it was of no avail, and I later learned that this is NOT the way to make a change to a REPO. Please advise. Thanks.
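
    For reference, a minimal post-commit hook matching the paths in the question might look like the sketch below; the log path is an assumption, and the hook file has to live in the repository's hooks/ directory and be executable by the user svn runs as.

        #!/bin/sh
        # Hedged sketch: /var/www/svn/drupal/hooks/post-commit
        # Bring the working copy up to date after every commit.
        /usr/bin/svn update --non-interactive /var/www/html/drupalsite \
          >> /var/log/svn-post-commit.log 2>&1

    A simple test under the same assumptions: commit a small change from any checkout of the repository, then check whether the change (and the log line) appears under /var/www/html/drupalsite.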

    Read the article

  • Windows XP - Power surge on hub port

    - by Swift-Tuttle
    For the last few weeks I have constantly been getting this error as a status bar balloon: "Power surge on hub port - A USB device has exceeded the power limits of its hub port." Because of this I am now unable to access any USB devices properly; they get disconnected intermittently. I did quite a few things to try to resolve this problem, firstly, obviously, through Windows Help. I even tried all the things suggested on the Microsoft website (which essentially amount to checking and updating the driver), but in vain. One suggestion I found when I Googled was to disable the USB2 controller through Device Manager, and since doing that, at every startup the System Configuration utility comes up complaining that it has been changed, etc. (on that same site it is mentioned to ignore this message). But after everything I still can't solve this problem. Any help is much appreciated. The system has Windows XP Service Pack 3 installed, with all the updates up to last month. Please let me know if any other hardware info is required. UPDATE: My laptop is about 5 years old now; it's an HP with a Celeron 1.4G processor, Windows XP SP3 installed, all the latest Windows updates, and 2 USB ports available. The BIOS is HP 68DTD ver F.0A. Do I need to update my BIOS from somewhere? Or is this a hardware problem altogether?

    Read the article

  • How can a Postfix/Dovecot(ssl)/Apache/Roundcube(non-ssl) setup leak email addresses?

    - by Jens Björnhager
    I have a Linux box email server with Postfix as the MTA, Dovecot as the IMAP server, and Apache with Roundcube as webmail. In my /etc/postfix/aliases I have just over a hundred different aliases, which makes as many email addresses on my domain. I use one address per website so I can easily shut down spam-infested addresses. During the half a year or so that I have had this setup, I have received 3 spam messages from 2 sources. Since I know exactly where I entered each address, it should be easy to pinpoint email-leaking websites and services. However, these sources are, as far as I can tell, unlikely to be selling email addresses. And for one of them to sell my email twice? I contacted one of the sources and they are adamant that their system is tight. They suggested the possibility that it is my server that is doing the leaking. So, my question is: how likely is it that my box is leaking email addresses, and how? Some notes:

        I don't store fully qualified email addresses anywhere in my system except in my maildir.
        I use an SSL connection to IMAP.
        I do not use https on the webmail.

    Read the article

  • "The directory name is invalid" when trying to install drivers in Windows 7 via Device Manager

    - by Luke
    First off, this computer is not mine; it's a customer's system. Having said that... The hard drive was moved to a new motherboard, CPU and RAM combo, and booted up fine. The customer puts in the driver CD, the drivers won't load, and he brings it to me. Under Device Manager for Windows 7 x64, I see lots of PCI to PCI bridge entries, one SMBus Controller, and about 20 Unknown Devices. Greeeeeat... So I start with the SMBus driver, directly from the Asus website for the motherboard (P8H77-M Pro). If I install from the setup program, it tells me to reboot, then it starts the install. It gets halfway through the setup, then fails ("An unknown error occurred. Setup will exit"). When I try to point to the folder from Device Manager, it starts copying files for the driver, even presents me with the proper name of the device, but then says that an error has occurred there as well: "The directory name is invalid". Doing some Googling, I saw that many people had this issue with Vista. OK, Vista and 7 are similar, maybe the solutions are the same... but they aren't. I tried:

        Copying the entire driver folder and setup utility to the Program Files folder and running it / selecting it in DM
        Downloading another set of drivers in case this one is corrupt
        Disabling UAC
        Deleting and recreating the %WINDIR%\TEMP folder
        Removing all references to previous hardware that I could find, even in Device Manager's hidden mode

    So far, nothing has worked. A wipe and reload is out of the question.

    Read the article

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access. I've got FTP access so I can upload PHP scripts, but I can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also, unfortunately, a Windows server (and we've been a Linux shop for over a decade now). So, if I want to evaluate MySQL and disk speed performance through PHP on a generic server, what is the best way to do this? There are already tools like https://github.com/raphaelm/php-benchmark or https://github.com/InfinitySoft/php-benchmark, but I'm surprised there isn't something that someone has already set up and configured to just run through and do some basic testing of a server's responsiveness. Every time we evaluate a new server environment, it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub, when there was a handy place to post scraps of code like this. Originally posted at http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php but it was recommended that I re-post it here.
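
    In the absence of a ready-made tool, a single-file probe along the following lines can be uploaded over FTP; this is a rough sketch with placeholder MySQL credentials, not a calibrated benchmark.

        <?php
        // Hedged sketch: crude disk-write and MySQL timings from one PHP file.
        // The credentials below are placeholders for the client's own values.
        function time_it($fn, $loops) {
            $start = microtime(true);
            for ($i = 0; $i < $loops; $i++) {
                $fn();
            }
            return microtime(true) - $start;
        }

        // Time 50 writes of a 1 MB file to the web root's directory.
        $disk = time_it(function () {
            file_put_contents(__DIR__ . '/bench.tmp', str_repeat('x', 1024 * 1024));
        }, 50);
        unlink(__DIR__ . '/bench.tmp');

        // Time 20 CPU-bound round trips to MySQL.
        $db  = new mysqli('localhost', 'db_user', 'db_pass', 'test');
        $sql = time_it(function () use ($db) {
            $db->query('SELECT BENCHMARK(100000, MD5(RAND()))');
        }, 20);

        printf("disk: 50 x 1MB writes in %.3fs\nmysql: 20 x BENCHMARK() in %.3fs\n", $disk, $sql);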

    Read the article

  • Installation Error on Windows Vista: "Side-by-Side configuration is incorrect"

    - by Maxim Z.
    NOTE: This is not a dupe of this other question. That question refers to a similar problem with 2 programs, while I'm only having it with 1, so the solution there doesn't apply to my situation. My relative asked me to install H&R Block 2009 on his Windows Vista 32-bit computer. I ran the installation program, which succeeded, but when I try to open the application itself, it gives me the following error: "The application failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail." Here are the steps I've taken so far to try to remedy this problem:

        In an elevated command prompt, run the command: sfc /scannow
        Uninstall H&R Block 2009
        Uninstall Microsoft Visual C++ 2005 Redistributable
        Reinstall Microsoft Visual C++ 2005 Redistributable by downloading it from the MSFT website
        Reinstall H&R Block 2009

    This didn't fix it. I've searched for a long time and haven't found anything that works. The H&R Block site itself states that the way to fix this problem is to uninstall and reinstall H&R Block 2009. Has anyone run into this issue before? If so, how can I fix it? Thanks in advance.
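
    One way to get more detail than the generic message, sketched here on the assumption that the stock sxstrace tool is present on the machine, is to capture a side-by-side trace while reproducing the failure from an elevated command prompt:

        :: Hedged sketch: trace the side-by-side resolution failure, then parse it.
        sxstrace Trace -logfile:C:\sxstrace.etl
        :: ...launch H&R Block 2009, let it fail, then stop the trace...
        sxstrace Parse -logfile:C:\sxstrace.etl -outfile:C:\sxstrace.txt
        notepad C:\sxstrace.txt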

    Read the article

  • WS 2008 R2 giving "Internal Server Error"

    - by dragon112
    I have had this problem for a while now and can't find the cause at all. When I open a page it will sometimes give a 500 Internal Server Error message. This happens on a website that otherwise works perfectly, but when I try to upload anything it will give this message (all PHP settings have been set to either 1GB or 3000 seconds, as have the IIS headers). Also, when I open a simple page which does nothing more than include another PHP page and a couple of classes, the error will occur. I have no idea what causes this error and would love to hear from any of you what it could be. I checked the server logs, and for the upload issue I found this error:

        The description for Event ID 1 from source named cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: managed-keys-zone ./IN: loading from master file managed-keys.bind failed: file not found
        the message resource is present but the message is not found in the string/message table

    Regards,
    Dragon
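
    To see the real error behind the generic 500 page while debugging, IIS 7+ can be told to return detailed errors; a minimal web.config fragment for the site root is sketched below as an assumption about where the site's configuration lives, and it should be reverted once the cause is found.

        <!-- Hedged sketch: show detailed errors instead of the generic 500 page. -->
        <configuration>
          <system.webServer>
            <httpErrors errorMode="Detailed" />
          </system.webServer>
        </configuration>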

    Read the article
