Search Results

Search found 24734 results on 990 pages for 'floating point conversion'.

  • Vyatta masquerade out bridge interface

    - by miquella
    We have set up a Vyatta Core 6.1 gateway on our network with three interfaces:

        eth0 - 1.1.1.1 - public gateway/router IP (to public upstream router)
        eth1 - 2.2.2.1/24 - public subnet (connected to a second firewall, 2.2.2.2)
        eth2 - 10.10.0.1/24 - private subnet

    Our ISP provided the 1.1.1.1 address for us to use as our gateway. The 2.2.2.1 address is so the other firewall (2.2.2.2) can communicate with this gateway, which then routes the traffic out through the eth0 interface. Here is our current configuration:

        interfaces {
            bridge br100 {
                address 2.2.2.1/24
            }
            ethernet eth0 {
                address 1.1.1.1/30
                vif 100 {
                    bridge-group {
                        bridge br100
                    }
                }
            }
            ethernet eth1 {
                bridge-group {
                    bridge br100
                }
            }
            ethernet eth2 {
                address 10.10.0.1/24
            }
            loopback lo {
            }
        }
        service {
            nat {
                rule 100 {
                    outbound-interface eth0
                    source {
                        address 10.10.0.1/24
                    }
                    type masquerade
                }
            }
        }

    With this configuration it routes everything, but the source address after masquerading is 1.1.1.1, which is correct, because that's the interface it's bound to. But because of some of our requirements here, we need it to source from the 2.2.2.1 address instead (what's the point of paying for a class C public subnet if the only address we can send from is our gateway!?). I've tried binding to br100 instead of eth0, but it doesn't seem to route anything if I do that. I imagine I'm just missing something simple. Any thoughts?
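
    One possible direction (a sketch only, and assuming Vyatta Core 6.1 accepts a source-type NAT rule with an outside-address node, which should be verified against the local CLI) is to replace the masquerade rule with a source NAT rule that keeps eth0 as the outbound interface but translates to 2.2.2.1:

        delete service nat rule 100 type
        set service nat rule 100 type source
        set service nat rule 100 outbound-interface eth0
        set service nat rule 100 source address 10.10.0.1/24
        set service nat rule 100 outside-address address 2.2.2.1
        commit

    The idea is that masquerade always uses the outbound interface's own address, while an explicit source rule lets the translation address differ from the interface the traffic leaves on.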

  • SQL Server Analysis Services, DNS, AD, Kerberos, Connection Issues

    - by ScaleOvenStove
    Running into a very weird issue. Converting servers to Windows 2008/SQL 2008. Have a server, SERVER_A, brand new, set up with Win2k8/SQL2k8 - works. Have a server, SERVER_B, running Windows 2003/SQL 2005. I want to migrate from SERVER_B to SERVER_A. I have all dbs, cubes, etc. set up on SERVER_A and it is mimicking functionality. Since users are using Excel to connect to SSAS, the connection string has SERVER_B in it. What I want to do is change DNS on the network to point SERVER_B (by name) at the IP of SERVER_A. I have successfully done this with another server, SERVER_C, but I need to do it with SERVER_B. What I found is that with SERVER_C, after changing DNS, I had to remove SERVER_C from AD and then it worked. I could connect to SERVER_C (DB), SERVER_C (SSAS default instance) and SERVER_C (SSAS named instance), and it all was actually connecting to SERVER_A. I tried to do the same with SERVER_B, and no luck. Changed DNS, removed it from AD, and it wouldn't connect. Found out that there were some SPNs set up in AD, so removed those and tried again. I then could connect to SERVER_B (DB) and SERVER_B (SSAS named instance), but not SERVER_B (SSAS default instance). I could connect to SERVER_B (SSAS default instance WITH the port #), but I need to be able to connect without the port number. I am at a loss as to why I can't connect to the default instance without a port #. Not sure if it is SPNs in AD, or another AD issue, or something else. Pretty sure it isn't something on the server (because SERVER_C works!). Any insight or suggestions would be greatly helpful!!
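
    Since only the SSAS default instance is failing, one thing worth checking is whether an Analysis Services SPN still needs to be registered against the alias; SSAS uses the MSOLAPSvc.3 service class. A sketch (the service account and domain names below are placeholders, and -A can be used instead of -S on older toolsets):

        setspn -L MYDOMAIN\ssas_service_account
        setspn -S MSOLAPSvc.3/SERVER_B MYDOMAIN\ssas_service_account
        setspn -S MSOLAPSvc.3/SERVER_B.mydomain.com MYDOMAIN\ssas_service_account

    The first command lists what is already registered; the next two add short-name and FQDN entries for the alias so Kerberos can find the default instance without a port.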

  • Java Deployment Ruleset not working

    - by adbertram
    I've created a Java Deployment Rule Set that looks like this:

        <ruleset version="1.0+">
          <rule>
            <id location="http://hpfweb.mydomain.com/" />
            <action permission="run" version="1.6.0_20" />
          </rule>
          <rule>
            <id location="http://*.mydomain.com" />
            <action permission="run" />
          </rule>
        </ruleset>

    I've created a self-signed cert and added it into the keystore as well as Trusted Certification Authorities. I have an app at http://hpfweb.mydomain.com that requires Java 1.6.0_20 and will error out if any other version is attempted. When only this version is installed on the computer, the application works. However, if a newer version is installed, it does not. As you can see, I've attempted to force the version to 1.6.0_20 in the ruleset. I've confirmed the deployment rule set is being applied successfully by going into the Java Control Panel -> Security and viewing the active deployment rule set; it is exactly as you see here. I've also looked at the web source for the application, and all references point to http://hpfweb* links. When the applet is launched I've brought up Task Manager and confirmed the java.exe launched is coming from the jre6 directory. When the newer version is installed, I'm getting the error "accesscontrolexception - access denied (java.awt.AWTPermission accessEventQueue)".
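
    For anyone reproducing this setup, the ruleset only takes effect once it is packaged as a signed DeploymentRuleSet.jar and dropped in the Windows-wide deployment directory. A sketch of the packaging and verification steps (the keystore name and alias here are placeholders):

        jar -cvf DeploymentRuleSet.jar ruleset.xml
        jarsigner -keystore mykeystore -signedjar DeploymentRuleSet.jar DeploymentRuleSet.jar drs
        jarsigner -verify -verbose DeploymentRuleSet.jar
        copy DeploymentRuleSet.jar C:\Windows\Sun\Java\Deployment\

    The -verify step confirms the jar is signed by the cert that was imported into the Signer/Trusted CA keystores, which is a common failure point with self-signed rule sets.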

  • Domain registrar transferring

    - by Mike Weerasinghe
    In 2004 I registered a domain name when I opened an account with DiscountASP.NET. I presume my domain registration was handled by a reseller. A DomainTools whois search shows that registration services are provided by Znode LLC. I changed hosting companies and need to change the DNS servers to point to my new hosting company, but I have no idea how to do that. There is no control panel I can access. Ideally I would like to transfer registrars. I emailed Znode support but have not received any response. I called and left a message and they have not called back. My new hosting company wants an EPP authorization code in order to transfer my domain, which I guess I need to get from Znode LLC. Anyone have any ideas on how I might go about transferring my domain over to a new registrar? The domain name has not expired and is currently active. Thanks in advance for your help.

  • Will having 2 MX records pointing to different mail server types cause delivery issues?

    - by Lyken
    I've inherited a setup where the mail server is Exchange 2010. For some reason there are two MX records set up, and I'm not sure why. One is the Exchange server, which has the higher priority, while the external (non-Exchange) server is the secondary MX record. I don't believe this was done for redundancy, as the other mail server is not set to route mail back to the Exchange server (it's just the web host's email for their hosting). The client has been experiencing disappearing email; however, after my investigation it's not actually disappearing: Exchange is successfully receiving the mail and then passing it on to the external server. It isn't happening all the time, just with some email messages from some domains. My question: is Exchange passing the mail on because it can see the secondary MX record and is configured (somewhere) to send mail out? If so, how do I stop it? Is it as easy as just removing the second MX record pointing to the external mail server, so that Exchange will stop passing mail on? I'm no Exchange expert, so I'm kinda stumped. Exchange MX tools say everything is set up and configured correctly from an external point of view.
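
    To see exactly which MX hosts and priorities are being published (and therefore what Exchange and outside senders can resolve), a quick external lookup is worth capturing before changing anything; for example, with the domain name as a placeholder:

        dig example.com MX +short
        nslookup -type=mx example.com

    Comparing that output against the Exchange send-connector and accepted-domain configuration helps confirm whether the second record is actually being used for delivery or is just sitting there.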

  • Removing a corrupted, hidden wireless network adapter from "Network Connections" in Windows 7

    - by srihari reddy
    The issue is that when I install a wireless network adapter on my Windows 7 Professional machine I have no connectivity; the system tray icon has a red X. First, I tried the obvious: install updated drivers from the manufacturer. When I did this, the Network Connections icon had gray bars and there was no connectivity. So I tried installing the network adapter on a different computer on the same network and verified that it works with no issues. Next, I ran scan disk with no issues. Next, I ran sfc as admin with no issues. At this point I turned to the router and turned SSID broadcast on, but that didn't help. I turned MAC address filtering off at the router, but that didn't help. Whenever I installed the original network adapter (a wireless N USB adapter with WPA2 TKIP+AES) it showed up as "Wireless Network Connection 2" with a grayed out icon and no connectivity. Lastly, I tried installing two different "verified working" USB wireless adapters on the Windows 7 Pro machine. The results were the same: "Wireless Network Connection 2" with a green bar icon but no connectivity. I installed the manufacturer's software and it indicated the NIC was not there, even though the driver installed successfully in Device Manager. I guess I should mention I first tried (insanely in vain) to use the (worthless) Windows Network troubleshooter. The results were... drumroll please... "There is a problem with the network adapter"... well, no duh! Also, during all of this the network adapter always shows as "Working Properly" in the properties dialogue of Device Manager for the wireless NIC. I checked for hidden devices in Device Manager but there were none. Here is my fundamental question, which I've tried to find in the Windows 7 support center with no luck: how do I remove/delete/uninstall network adapters from the Windows 7 registry? In particular hidden, corrupted network adapters that used to be working.
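
    The usual way to expose and clean out ghost adapters (a sketch of the standard approach; run from an elevated command prompt) is to tell Device Manager to show non-present devices and then uninstall the stale entries under the Network adapters branch:

        set devmgr_show_nonpresent_devices=1
        start devmgmt.msc
        rem In Device Manager: View -> Show hidden devices, expand Network adapters,
        rem then right-click and uninstall the grayed-out wireless adapter entries.

    After removing the stale entries and rebooting, reinstalling the adapter normally gets the original "Wireless Network Connection" name back rather than a "...2" copy.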

  • Configuring port forwarding for SSH - no response outside LAN [migrated]

    - by WinnieNicklaus
    I recently moved, and at the same time purchased a new router (Linksys E1200). Prior to the move, I had my old router set up to forward a port for SSH to servers on my LAN, and I was using DynDNS to manage the external IP address. Everything worked great. I moved and set up the new router (unfortunately, the old one is busted so I can't try things out with it), updated the DynDNS address, and attempted to restore my port forwarding settings. No joy. SSH connections time out, and pings go unanswered. But here's the weird part (i.e., key to the whole thing?): I can ping and SSH just fine from within this LAN. I'm not talking about the local 192.168.1.* addresses. I can actually SSH from a computer on my LAN to the DynDNS external address. It's only when the client is outside the LAN that connections are dropped. This surely suggests a particular point of failure, but I don't know enough to figure out what it is. I can't figure out why it would make a difference where the connections originate, unless there's a filter for "trusted" IP addresses, which is perhaps just restricted to my own. No settings have been touched on the servers, and I can't find any settings suggesting this on the router admin interface. I disabled the router's SPI firewall and "Filter anonymous traffic" setting to no avail. Has anyone heard of this behavior, and what can I do to get past it?
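
    Since connections from inside the LAN to the DynDNS name work, one way to narrow down where outside packets die (a sketch; this assumes the SSH servers are Linux boxes and the interface name is a guess) is to watch the server while someone outside the LAN attempts to connect:

        # on the SSH server, confirm whether forwarded packets from outside ever arrive
        sudo tcpdump -ni eth0 tcp port 22

    If internal clients show up in the capture but external ones never do, the forward is being dropped upstream of the server (router or ISP) rather than by anything on the host itself.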

  • Email test deferred (mail transport unavailable) with ClamAV

    - by dirt
    I'm trying to set up a simple new mail server; when I send a test email to the server, the email is getting hung up during delivery (the user mapping is found) and the email is never found in /home/user/Maildir/new. Here is my maillog after a fresh reboot and test email; there are a few warnings I am unfamiliar with. Can you please point me in the right direction?

        Oct 25 14:54:57 loki dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled)
        Oct 25 14:54:58 loki postfix/postfix-script[1369]: starting the Postfix mail system
        Oct 25 14:54:58 loki postfix/master[1370]: daemon started -- version 2.6.6, configuration /etc/postfix
        Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: request to update table btree:/etc/postfix/smtpd_scache in non-postfix directory /etc/postfix
        Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: redirecting the request to postfix-owned data_directory /var/lib/postfix
        Oct 25 14:56:00 loki postfix/smtpd[1455]: connect from mail-ob0-f180.google.com[209.85.214.180]
        Oct 25 14:56:01 loki postfix/smtpd[1455]: 1CF5E20A8B: client=mail-ob0-f180.google.com[209.85.214.180]
        Oct 25 14:56:01 loki postfix/cleanup[1461]: 1CF5E20A8B: message-id=
        Oct 25 14:56:01 loki postfix/qmgr[1379]: 1CF5E20A8B: from=, size=1788, nrcpt=1 (queue active)
        Oct 25 14:56:01 loki postfix/qmgr[1379]: warning: connect to transport private/scan: No such file or directory
        Oct 25 14:56:01 loki postfix/error[1462]: 1CF5E20A8B: to=, orig_to=, relay=none, delay=0.18, delays=0.15/0.02/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable)
        Oct 25 14:56:01 loki postfix/smtpd[1455]: disconnect from mail-ob0-f180.google.com[209.85.214.180]

    master.cf snippets:

        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp      inet  n       -       n       -       -       smtpd
        submission inet n       -       n       -       -       smtpd
          -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        smtps     inet  n       -       n       -       -       smtpd
          -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        scan      unix  -       -       n       -       16      smtp
          -o smtp_data_done_timeout=1200
          -o smtp_send_xforward_command=yes
          -o disable_dns_lookups=yes
        127.0.0.1:10026 inet n  -       n       -       16      smtpd
          -o content_filter=
          -o local_recipient_maps=
          -o relay_recipient_maps=
          -o smtpd_restriction_classes=
          -o smtpd_client_restrictions=
          -o smtpd_helo_restrictions=
          -o smtpd_sender_restrictions=
          -o smtpd_recipient_restrictions=permit_mynetworks,reject
          -o mynetworks_style=host
          -o smtpd_authorized_xforward_hosts=127.0.0.0/8
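
    The deferral itself ("connect to transport private/scan: No such file or directory") means the queue manager is being told to hand mail to a transport named scan whose socket does not exist - typically because master.cf was changed without a reload, or the scan line is being read as a continuation of the previous entry. A sketch of the first sanity checks (assuming the filter was declared via content_filter in main.cf):

        postconf content_filter            # see which transport main.cf hands mail to
        grep -n '^scan' /etc/postfix/master.cf
        postfix reload                     # make master re-read master.cf and create the socket

    If the scan service really is defined and the reload succeeds, /var/spool/postfix/private/scan should appear and the queue should drain on the next delivery attempt.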

  • DNS Resolution doesn't work after uninstalling Cisco VPN & Deterministic Network Enhancer in Win 7

    - by Craig M
    I just upgraded my home PC to Windows 7 Ultimate 32 bit. After trying various methods to get the Cisco VPN client to work, I gave up and decided to just run it in XP mode. The last steps I tried were in this article ( http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/d880dfe5-7f44-4955-8620-2a9355d8ea8b/ ) After that, I uninstalled the Cisco client and rebooted. I uninstalled the Deterministic Network Enhancer and rebooted again. Both uninstalled successfully, but now I'm not able to resolve any DNS. The only way I can resolve DNS is to reinstall the DNE, reboot, and uninstall the DNE. Then I am able to resolve DNS lookups until I reboot again. Once it's rebooted, no more DNS. Any ideas? Edit: I completely forgot I'd asked this question until harrymc posted his answer. I've since found out that to fix this problem, I need to disable my Local Area Connection and re-enable it. Once I do that I have no trouble making network connections until the next time I reboot at which point I repeat the process. It's annoying, but manageable since I reboot very infrequently.
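
    Because the DNE hooks into the Windows network stack, a broken uninstall is the classic case where resetting the winsock catalog and TCP/IP stack helps; a sketch of the usual recovery (run from an elevated command prompt, then reboot):

        netsh winsock reset
        netsh int ip reset resetlog.txt

    This rebuilds the stack state the DNE was layered into, which is usually less painful than the install/uninstall cycle after every reboot.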

  • Split Tunnel VPN using incorrect Tunnel

    - by Brian Schmeltz
    Our company has a handful of field offices that have recently been set up with a regular internet connection after we removed the T1 and router that connected them directly to our network. Now, when the users are in the office, they log in to the VPN to be able to connect to the network. So that they can print and scan from the local multi-function device, we have set up a split tunnel VPN. We currently have about 15-20 users using this setup around the country without any problems. Recently one of our users started having problems accessing internal programs/sites when connecting from both home and the office. There are three other users in the same office and they do not have this problem. I assumed that it was something with the computer and went ahead and replaced it with another of the same model. The computer worked fine in our home office; however, when the user received it, she had the exact same problem both at home and in the field office. Thinking it may be a NIC driver issue, I sent her another computer, this time a different model; the same problem occurred. If I update the hosts file to point to the correct paths, things will work, and if I connect via a normal VPN connection everything works, but then the user cannot scan or print - which is a problem. I have tried to find ways to create another tunnel on a normal VPN and to force the correct tunnel on the split tunnel VPN. It appears that there is something related to the ISP, because if I connect to Comcast or Verizon it is fine, but once she connects to Insite then she has problems. I have been unable to get any support from Insite as they don't feel the issue is with them. We use a Nortel VPN client. Any thoughts or ideas would be appreciated.
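
    Since editing the hosts file fixes name resolution but not routing, comparing the routing table on a working machine against the broken one while both are on the split tunnel can show whether the internal subnets are missing a route toward the tunnel. A sketch (the addresses below are placeholders, not the real subnets or VPN gateway):

        route print
        rem temporary test only - push an internal subnet at the VPN tunnel's gateway address
        route add 10.0.0.0 mask 255.0.0.0 10.8.0.1

    If a manually added route restores access on the affected ISP, the split-tunnel network list (rather than DNS) is what needs adjusting in the Nortel client profile.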

  • How do I install a different OS on a Compaq Presario cq56 with preinstalled SuSE 11?

    - by McCoy
    Thing is, I don't have a clue of Linux systems, I usually use WinXP. Bought a notebook with SuSE 11 on it, because I have my XP licence and thought I could install that if I found the chipset drivers for the hardware (which I'm not completely sure I have the right versions of). Then I thought I'd give it a shot with the SuSE, looked nice enough. But I can't get my external hd to work (tried force mount) and the banshee doesn't do anything like playing video. Since that is one of the two main purposes of this notebook, I need to get that to work. Tried downloading VLC player, but that only works with SuSE 11.1 upwards. So I downloaded a SuSE 11.3 and burned the iso. But surprise, no way the notebook would boot from cd. Same with the XP cd (considered setting up a dual boot). And no, I can't get to BIOS to reset to default, either. So I can basically do nothing else than going online with this thing and that's not enough for me (gamer in withdrawal, yikes!). I need at least to get to my firefox profile on the external hd and be able to watch video. Can somebody please help me? I think at this point I'd prefer to install XP and MAYBE the SuSE 11.3 after that. I'm not a native speaker, so please speak plainly, thanks. :) Edit: if this is impossible, could someone please help me with the external hd mount and video playback? Edit: Found out how to boot from cd by now. But still no XP, because I get bluescreen after bluescreen while setup is loading files. I guess it's the missing SATA drivers...
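
    For the external drive specifically, a sketch of a manual mount from a terminal - the device name and filesystem are guesses (an NTFS-formatted USB disk showing up as /dev/sdb1 is assumed; the first command shows the real name):

        sudo fdisk -l                       # find the partition, e.g. /dev/sdb1
        sudo mkdir -p /mnt/external
        sudo mount -t ntfs-3g /dev/sdb1 /mnt/external

    If the ntfs-3g driver is not already present on the stock SuSE install, it needs to be added from the package manager first.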

  • Cannot login to zabbix web portal

    - by hlx98007
    I've managed to install zabbix22-server on CentOS 6.x along with php-fpm and nginx. I can view the page at 127.0.0.1, but I only get the login page, and after clicking the "Login" button the page is the same. What can I do to make it work as expected, so that I can log in as admin? Here are some confs:

    nginx_zabbix.conf:

        server {
            listen 80;
            add_header X-Frame-Options "SAMEORIGIN";
            access_log /var/log/nginx/zabbix.log;
            error_log /var/log/nginx/zabbix.err.log;
            client_max_body_size 500M;

            # This folder is a soft link to /usr/share/zabbix
            # the permission has been set to nginx:nginx recursively.
            root /var/www/zabbix;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi.conf;
                fastcgi_param PATH_INFO $path_info;
            }
        }

    php-fpm is using its default values, with the permission user/group set to nginx (rather than apache). The folder /var/lib/php/session has been set to nginx:nginx with permission 770. SELinux is set to disabled. I've restarted everything up to this point.
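
    One thing that stands out in the PHP location block is that $path_info is never defined, so PATH_INFO is always passed empty. For comparison, a more conventional PHP-FPM location looks like the sketch below (standard nginx directives, but treat it as a starting point rather than a known fix for the Zabbix login loop):

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }

    A silent return to the login page is also the typical symptom of PHP sessions not being written, so the php-fpm error log and the session directory ownership are worth re-checking after any change here.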

  • Need troubleshooting advice for intermittent dns problems with requests on isp nameservers

    - by Mnebuerquo
    I've been having some intermittent dns problems with a web server, where certain isp's dns servers don't have my hostnames in cache and fail to look them up. At the same time, queries to opendns for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. My website is on a server running linux with Plesk. My dns records are configured with plesk (so my server is its own dns master). Domain name is registered with godaddy. I'm not real knowledgeable about dns, so I don't really know how to begin with troubleshooting. I've started learning to use dig, but while I can read the manpage to learn the syntax, I don't really know what questions to ask. Since the problem is intermittent I haven't been able to really catalog many symptoms. Symptoms I have observed: Certain people repeatedly reported intermittent problems connecting to my website. This was only from certain networks. (Ex: One guy could connect reliably from his office but not his home.) Sometimes I notice my browser taking a long time looking up the hostname for my site (Firefox shows a message in the status bar at the bottom). For me this is in the ten second range. ssh connections from anywhere to my server take a long time to connect but then seem to work fine once connected. So hopefully the folks on serverfault can point me to a good beginner tutorial for understanding dns, and suggest troubleshooting questions to ask next time one of my users reports connectivity problems.
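
    A few dig queries help separate "my authoritative server is publishing something wrong" from "some resolvers are failing"; a sketch, with the domain and name server names as placeholders:

        dig example.com NS +short                        # which name servers the world is told about
        dig @ns1.example.com example.com A +norecurse    # ask your authoritative server directly
        dig @208.67.222.222 example.com A                # compare against OpenDNS
        dig example.com SOA +short                       # check the serial and who claims to be master

    If the authoritative answers are consistent but certain ISP resolvers intermittently return SERVFAIL, the next things to look at are very short TTLs, glue/NS mismatches at the registrar, or the Plesk-hosted server dropping queries under load.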

  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third party applications hard coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0. I've tried rebuilding the /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear if i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to cause devfsadm to start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and rdsk to point to the correct /devices. I feel like I am going way off path here. Any suggestions? Thanks Update: This is on virtual ESXi hardware with an additional pass-through HBA. There is no controller 0 on the machine, that is for sure. devfsadm -C cleans up all the c0 device symlinks but keeps the already linked controllers at their current ids.
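
    For the manual-symlink route mentioned above, the existing c1 entries show exactly which /devices path each slice resolves to, and the c0 aliases can be pointed at the same targets; a sketch (the /devices path shown is illustrative - use whatever ls reports on the actual system):

        ls -l /dev/dsk/c1t0d0s0
        # suppose it points at ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:a
        ln -s ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:a /dev/dsk/c0t0d0s0
        ln -s ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:a,raw /dev/rdsk/c0t0d0s0

    The same pattern has to be repeated for every slice the third-party applications reference, and the links will need re-checking after any devfsadm -C run since it prunes dangling or unexpected entries.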

  • BGP Multipath & return routes

    - by Dennis van der Stelt
    I'm probably a complete n00b concerning serverfault-related questions, but our IT department makes a bold statement I wish to verify. I've searched the internet but can find nothing related to my question, so I come here. We have Threat Management Gateway 2010, and we used to just route the request to IIS; it contained the client IP address, so we could see where it was coming from. But now they have turned on "Requests appear to come from the TMG server", so IP addresses aren't forwarded anymore; every request has the IP of the TMG server. Now, the idea behind this is that because of multipath BGP routes, the incoming request goes over RouteA but the acknowledgement messages could return over RouteB. The claim is that because the request doesn't come from the first known source (our proxy) but instead from IIS, some smart routers on the visitor's side don't recognize the acknowledgement message and filter it out. In other words, the response never arrives. Again, this is the claim. But I cannot find ANY resources on the internet that support this claim. I do read about BGP multipath, but more in the case that there are alternative routes when the fastest route fails for some reason. So is the claim completely bogus, or is there (some) truth to it? Can someone explain or point me to resources? Thanks in advance!

  • Simple end-to-end load and bottleneck monitoring for DB-based web sites

    - by T.J. Crowder
    What tools do you use / would you recommend for monitoring a Linux-based, DB-based website's servers for bottlenecks and load? The obvious goal being to know when growth has gotten to the point where it's necessary to scale up (or out) one or more of the bits and pieces because the current system won't be managing the load if an observed trend continues. I'm looking for general recommendations based on standard Linux load metrics, disk I/O metrics, network I/O metrics, etc., but if specifics are helpful: It'll be Tomcat6 using APR (possibly with a Varnish or similar caching and balancing front-end), MySQL, and either Ubuntu 8.04 LTS or 10.04 LTS depending on timing. I know about top, vmstat, iostat, bwmon and the like that collect and parse info from the /proc file system (et. al.); and obviously MySQL provides a lot of queriable performance information. I could use those directly, probably automating periodic monitoring logs with scripts and such. But I have a suspicion that I'd be reinventing a wheel... For example, Hyperic HQ seems to be along the lines of what I'm looking for. Others? Meta: I tend to think of "recommendation" questions as needing to be CW because there's no one right answer, but I see a lot of these here that aren't CWs, so I haven't marked it as one. I'll happily do so if enough people think I should.
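
    For the roll-your-own baseline before committing to something like Hyperic, the interval-sampling forms of the tools already mentioned are enough to log trends over time; a sketch (intervals and log paths are arbitrary, and iostat assumes the sysstat package):

        mkdir -p /var/log/loadwatch
        vmstat 5       >> /var/log/loadwatch/vmstat.log &
        iostat -dxk 5  >> /var/log/loadwatch/iostat.log &

    A week of that output, graphed, usually makes it obvious whether CPU, disk, or memory is the first wall the Tomcat/MySQL stack will hit.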

  • 500 error with deploying rails application via apache2+passenger

    - by user1633983
    I finally completed my own app, so the only work left is deploying it. I'm using Ubuntu 10.04 and apache2 (installed by apt-get), and I'm trying to deploy through Passenger. I installed the passenger gem like this:

        sudo gem install passenger
        rvmsudo passenger-install-apache2-module

    and I configured the Apache settings as the installation message says. I added the lines below in the middle of the /etc/apache2/apache2.conf file:

        LoadModule passenger_module /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17/ext/apache2/mod_passenger.so
        PassengerRoot /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17
        PassengerRuby /home/admin/.rvm/wrappers/ruby-1.9.3-p194/ruby

    and I appended the lines below to the /etc/apache2/sites-available/default file:

        <VirtualHost *:80>
            ServerName localhost
            # !!! Be sure to point DocumentRoot to 'public'!
            DocumentRoot /home/admin/homepage/public
            <Directory /home/admin/homepage/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews
            </Directory>
        </VirtualHost>

    But when I restart the apache service and hit the address, a 500 error occurs. At first it was the same 500 error but with Apache's error page; after I reinstalled libapache2-mod-passenger, the 500 error page changed to the one from Rails (which is located at public/500.html), so I think the passenger module is properly connected with Apache. What should I do to fix this problem? Do I need to configure something inside my app before deployment?
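
    When the Rails-branded 500 page appears, Passenger is loading but the application itself is failing to boot, and the actual exception is almost always in the logs rather than the browser. A sketch of where to look first (paths follow the layout above; the bundler step assumes the app uses a Gemfile):

        tail -n 50 /var/log/apache2/error.log
        tail -n 50 /home/admin/homepage/log/production.log
        # confirm the app's gems are installed for the same ruby Passenger was built against
        cd /home/admin/homepage && bundle install --deployment

    Missing production gems, an unmigrated production database, or an app directory the nginx/apache user cannot read are the usual culprits the log will name directly.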

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have tried to address my current memory footprint issues by optimizing the apps, adding more RAM, or augmenting with swap. But the problem is that I doubt there's much memory optimization I'd milk from optimizing the Django apps (the stack being open-source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer options to use swap! So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory to the point that I have to request a VPS restart (at that point I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a certain critical amount (for example, when free RAM falls to, say, 10%) - which I've noticed occurs after the VPS has been up for a long time, and when traffic suddenly spikes on some of the heavier apps (most are just staging apps anyway). I then wish to be able to kill/restart the offending process(es) - most likely Apache. Doing this manually in those situations has restored sane memory usage levels - a hint that possibly one or more of the Django apps has a memory leak? In brief:

    - Monitor overall system RAM usage.
    - When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or, simpler, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, just kill/restart it.
    - Rinse and repeat... (a sketch of such a watchdog follows below)
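
    As a stopgap, a small cron-driven watchdog is usually enough; the sketch below treats the threshold, service name, and script path as assumptions to adjust, and uses the old Wheezy-era column layout of free to compute free + buffers + cache:

        #!/bin/bash
        # /usr/local/bin/memwatch.sh - restart Apache when available memory is critically low
        THRESHOLD=10   # percent of total RAM considered "critical"

        # total and (free + buffers + cached), taken from the Mem: line of `free`
        read total avail <<< "$(free | awk '/^Mem:/ {print $2, $4+$6+$7}')"
        pct=$(( avail * 100 / total ))

        if [ "$pct" -lt "$THRESHOLD" ]; then
            logger -t memwatch "available memory at ${pct}%, restarting apache2"
            service apache2 restart
        fi

    Run it from root's crontab, e.g. */5 * * * * /usr/local/bin/memwatch.sh, and the logger lines in syslog double as a record of how often the box actually hits the threshold.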

  • Why can't we treat SSL certs like PGP keys instead of trusting CAs?

    - by yarun can
    I am dumb and stupid and I do not know all the technical aspects of SSL and the server/client side implications and implementations. However, I understand them well enough from a user's point of view to use SSL and encryption daily. I was thinking how silly it is to trust some known or unknown CAs when it comes to the certificates for our servers. There have been many cases of misconduct, misuse, compromise, and theft of certificates and CA keys at those places. On top of those known issues, we also have to pay these guys regularly. I am wondering: why can't we use and treat web server certificates like we use our PGP keys? So I sign an SSL certificate and send it to a central server, and then each user accessing my site checks the validity and the keys against some central server (like PGP key servers). Is this a stupid idea? If so, what could be a better idea than the current system of issuing valid certificates? I am looking for a better, more secure idea. Naturally this is not a solution to an existing problem; rather it would be a hypothetical solution for some future implementation of the currently messed-up web of trust on the internet, given the recent news about the NSA and their criminal buddies around the world. Thanks.

  • Cleaning a proxy/phishing trojan from Windows XP computer

    - by i-g
    I am trying to remove an interesting trojan from a Windows XP computer. It manifests itself as a phishing page (screenshot linked) that appears after the user tries to log on to eBay. So far, I haven't found any other web sites that are affected. As you can see, the trojan intercepts browser connections (all installed browsers are affected) and injects this phishing page. The address looks like it's ebay.com, but HTTPS verification doesn't work (no lock icon or green bar in Firefox.) At some point, Trojan.Dropper appeared on the computer. I removed it with Malwarebytes Anti-Malware. Although it reappeared several times, it seemed to be gone after I booted into Safe Mode and did a full system scan with MBAM. Now, however, a different trojan has appeared on the machine; I suspect it was installed by Trojan.Dropper. So far, MBAM, Ad-Aware, and Spybot S&D have been unable to remove it. I've looked for it in the HijackThis log but haven't found anything conclusive. Has anyone run across a trojan like this before? Where would I start looking for it to remove it manually? Thank you for reading.

  • Connect over WiFi to SQL Server from another computer

    - by Bronzato
    I tried to connect over WiFi to SQL Server with SQL Server Management Studio from another computer, but it failed. I have a computer with Windows 7 & SQL Server 2008 (let's say the server computer). Next to it I have a freshly installed computer with Windows 7 & SQL Server Management Studio (let's say the client computer). What I did on the server computer:

    - Configured the firewall by enabling port 1433
    - Enabled network protocols (TCP/IP) inside SQL Server Configuration Manager
    - Checked "Allow remote connections to this server" in the server properties in SQL Server Management Studio
    - Started SQL Server Browser
    - Restarted the services (SQL Server Browser is stopped at this point, but I don't think it is necessary. Is it?)

    Next, I successfully tested a ping on port 1433 from my client computer with a tool named tcping (e.g. tcping 192.168.1.4 1433). But I still cannot connect from my client computer to SQL Server on my server computer.

    Ok, something new with this problem: I have now successfully connected to my "server computer" with Management Studio. What I did was type just the computer name in the server name field of the Management Studio connection window. My previous (failed) attempt was to type the computer name followed by the SQL Server instance (e.g. COMPUTER_NAME\SQL2008). I don't know why I only have to type the computer name.

    Now my new challenge is to successfully connect my VB6 application to this remote database located on my "server computer". I have a connection string for this, but it fails to connect. Here is my connection string:

        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP\SQL2008"

    Any idea what's going wrong?
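
    One detail worth noting about the symptoms above: when a client connects by instance name (COMPUTER_NAME\SQL2008) over the network, it asks SQL Server Browser for the instance's port, so with Browser stopped the instance-name form fails while the plain computer name (default port) or an explicit port still works. A sketch of the explicit-port variant of the same connection string (server, catalog, and credentials are the ones from the question; 1433 assumes the named instance is listening there):

        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP,1433"

    Starting the SQL Server Browser service is the alternative that keeps the instance-name syntax working from both Management Studio and the VB6 app.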

  • Debian/OVH: How to configure multiple Failover IP on the same Xen (Debian) Virtual Machine?

    - by D.S.
    I have a problem on a Xen virtual machine (running the latest Debian) when I try to configure a second failover IP address. OVH reports that my IP is misconfigured, and they complain that they receive a massive quantity of ARP packets from these IPs, so they are going to block my IP unless I fix this issue. I suspect there's a routing issue, but I don't know (and can't find any useful info on the provider's website, and their support doesn't provide me a valid solution, just bounces me to their online - useless - guides). My /etc/network/interfaces looks like this:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address AAA.AAA.AAA.AAA
            netmask 255.255.255.255
            broadcast AAA.AAA.AAA.AAA
            post-up route add 000.000.000.254 dev eth0
            post-up route add default default gw 000.000.000.254 dev eth0

        # Secondary NIC
        auto eth0:0
        iface eth0:0 inet static
            address BBB.BBB.BBB.BBB
            netmask 255.255.255.255
            broadcast BBB.BBB.BBB.BBB

    And the routing table is:

        Kernel IP routing table
        Destination      Gateway          Genmask          Flags Metric Ref    Use Iface
        000.000.000.254  0.0.0.0          255.255.255.255  UH    0      0        0 eth0
        0.0.0.0          000.000.000.254  0.0.0.0          UG    0      0        0 eth0

    In these examples (the true IP addresses are replaced by fake ones, guess why :)), 000.000.000.000 is my main server's IP address (dom0), 000.000.000.254 is the default gateway OVH recommends, AAA.AAA.AAA.AAA is the first failover IP and BBB.BBB.BBB.BBB is the second one. I need both AAA.AAA.AAA.AAA and BBB.BBB.BBB.BBB to be publicly reachable from the Internet and point to my domU, and to be able to access the Internet from inside the virtual machine (domU). I am using eth0 and eth0:0 because, per OVH support, I have to assign both IPs to the same MAC address and then create a virtual eth0:0 interface for the second IP. Any suggestion? What am I doing wrong? How can I stop OVH complaining about ARP flood? Many thanks in advance, DS
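
    For comparison, the pattern OVH's guides usually describe for a bridged guest with a failover /32 is an on-link host route to the dom0 gateway followed by a single default route; a sketch of that primary stanza (using the same placeholder addresses as above - note the doubled "default" in the post-up line of the current file, which does not match this pattern and is worth re-checking):

        auto eth0
        iface eth0 inet static
            address AAA.AAA.AAA.AAA
            netmask 255.255.255.255
            broadcast AAA.AAA.AAA.AAA
            post-up route add 000.000.000.254 dev eth0
            post-up route add default gw 000.000.000.254 dev eth0

    The eth0:0 alias for the second failover IP normally needs no routes of its own, since all outbound traffic still leaves via the single default route on eth0.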

  • HP Officejet 4500 G510n-z Not Showing up in Remote Desktop (Terminal Services)

    - by Greg_the_Ant
    I installed this printer on a windows XP machine. First using the wireless option, and later using USB. In both cases when I connect to my other computer (also Windows XP) via terminal services and check printers in the local resources tab it does not show up on the remote session. I used to have a Samsung connected to my local computer over USB and and that worked fine over terminal services. Things I tried so far: I did read this page and installed the software fix on both computers: (Printers that use ports that do not begin with...) I installed the minimum HP software install on the remote computer and that didn't help either. I also tried running the add new printer wizard on the remote computer: I selected "local printer attached to this computer" and did not check the "automatically.." option. On the next page of the wizard I can select an option for "use the following port". I see options for TS001 through TS009 there. I'm assuming those are coming from the local machine. I tried clicking each one and then checking "have disk" and pointing it to C:\3be8dc611b11322e8ddf8a67\i386\msxpsdrv.inf 1 but for every single TS00.. port it says "The specified location does not contain information about your hardware." Any help would be greatly appreciated. I'm pretty stuck at this point. 1 C:\3be8dc611b11322e8ddf8a67 is the folder I extracted the HP driver software to after I downloaded it.

  • Lose internet connection, yet online games continue

    - by Mike
    For the past week or so, my internet connection has been anything but stable. Restarting my modem/router always fixes the problems, but since it has occurred so often, I'm noticing confusing patterns which I was hoping someone could help answer. My internet connection kicks out about 4-5 times a day. The sure-fire way to fix it is to restart my all-in-one modem/router. Sometimes I can diagnose the problem on my laptop which resets my wireless network adapter and fixes the problem, but not always. If that doesn't fix the problem, it usually reports that the connection between the modem and internet is the problem which requires a restart of the router. The odd thing which baffles me is that my connection is supposedly lost such that no browsers can connect to sites, yet things like online games still continue to play without issue. How is this possible? I thought maybe the game was running locally on my PC but that couldn't be the answer because I was still getting messages from other players. So my real question is: How can my internet browsers (firefox, chrome, even IE) lose connection to the internet, but other applications like online games not? Am I actually losing connection or am I mistaken? Edit: I'd also like to add that netflix on my PS3 which is directly connected to the same access point will also lose connection. So internet browsers and netflix lose their internet connection while online games continue without an issue.
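
    A quick way to test the "browsers dead, games alive" split the next time it happens (a sketch, assuming a Windows machine; any well-known public IP works for the second test) is to check whether new DNS lookups fail while raw connectivity still works, since already-established game connections don't need DNS:

        nslookup www.google.com     # new name resolution through the router's DNS
        ping -n 4 8.8.8.8           # raw IPv4 reachability that bypasses DNS entirely

    If the ping succeeds while the lookup times out, the router's DNS relay is what is dying rather than the whole connection, which matches browsers and Netflix failing while in-progress game sessions keep running.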
