Search Results

Search found 34826 results on 1394 pages for 'valid html'.

Page 1190 of 1394

  • Configuring ASP.NET MVC2 on Apache 2.2 using mod_aspdotnet

    - by user40684
    Trying to get an MVC2 website to run on Apache 2.2 web server (running on Windows) that utilizes the mod_aspdotnet module. Have several ASP.NET Virtual Hosts running, trying to add another. MVC2 has NO default page (like the first version of MVC had e.g default.aspx). I have tried various changes to the config: commented out 'DirectoryIndex', changed it to '/'. Set 'ASPNET' to 'Virtual', will not load first page, always get: '403 Forbidden, You don't have permission to access / on this server.' Below is from my http.conf: LoadModule aspdotnet_module "modules/mod_aspdotnet.so" AddHandler asp.net asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo <IfModule aspdotnet_module> # Mount the ASP.NET /asp application #AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com" Alias /MyWebSiteName" D:/ApacheNET/MyWebSiteName.com" <VirtualHost *:80> DocumentRoot "D:/ApacheNET/MyWebSiteName.com" ServerName www.MyWebSiteName.com ServerAlias MyWebSiteName.com AspNetMount / "D:/ApacheNET/MyWebSiteName.com" # Other directives here <Directory "D:/ApacheNET/MyWebSiteName.com"> Options FollowSymlinks ExecCGI AspNet All #AspNet Virtual Files Directory Order allow,deny Allow from all DirectoryIndex default.aspx index.aspx index.html #default the index page to .htm and .aspx </Directory> </VirtualHost> # For all virtual ASP.NET webs, we need the aspnet_client files # to serve the client-side helper scripts. AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows /Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4" <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles"> Options FollowSymlinks Order allow,deny Allow from all </Directory> </IfModule> Has anyone successfully run MVC2 (or the first version of MVC) on Apache with the mod_aspdotnet module? Thanks !
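
    One detail worth checking in the configuration quoted above: the Alias line has a misplaced quote (Alias /MyWebSiteName" D:/ApacheNET/MyWebSiteName.com"), which would leave the alias pointing at the wrong path. A minimal corrected pair of directives, with the paths copied from the question, is sketched below; whether mod_aspdotnet can hand MVC2's extensionless routes over to ASP.NET at all is a separate question.

      AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
      Alias /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"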

    Read the article

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. We'd like to avoid any kind of local storage (sharing a disk on a desktop or something) since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements: (1) transport security via SSL to the bucket, (2) encryption of bucket contents, (3) bi-directional syncing. Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old http http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OSX client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this one. Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
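
    For the encryption-plus-SSL half of this, a rough sketch of a one-way push using gpg and s3cmd is below (the bucket name and passphrase file are placeholders, and s3cmd is assumed to be configured with use_https = True in ~/.s3cfg). It deliberately does nothing about requirement 3 - bi-directional sync is the part no off-the-shelf script seems to cover.

      #!/bin/bash
      # encrypt everything in ./shared with one shared symmetric key, then push over HTTPS
      mkdir -p ./staging
      for f in ./shared/*; do
        gpg --batch --yes --symmetric --cipher-algo AES256 \
            --passphrase-file ~/.team-sync-key -o "./staging/$(basename "$f").gpg" "$f"
      done
      s3cmd sync ./staging/ s3://my-team-bucket/shared/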

    Read the article

  • Unable to connect to Postgres on Vagrant Box - Connection refused

    - by Ben Miller
    First off, I'm new to Vagrant and Postgres. I created my Vagrant instance using http://files.vagrantup.com/lucid32.box without any trouble. I am able to run vagrant up and vagrant ssh without issue. I followed the instructions at http://blog.crowdint.com/2011/08/11/postgresql-in-vagrant.html with one minor alteration: I installed the "postgresql-8.4-postgis" package instead of "postgresql postgresql-contrib". I started the server using: postgres@lucid32:/home/vagrant$ /etc/init.d/postgresql-8.4 start While connected to the vagrant instance I can use psql to connect to the instance without issue. In my Vagrantfile I had already added: config.vm.forward_port 5432, 5432 but when I try to run psql from localhost I get: psql: could not connect to server: Connection refused Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"? I'm sure I am missing something simple. Any ideas? Update: I found a reference to an issue like this and the article suggested using: psql -U postgres -h localhost With that I get: psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
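
    The combination of symptoms above (plain psql from the host refused on the Unix socket, then "server closed the connection unexpectedly" over TCP) usually means Postgres inside the VM is only listening on its own loopback and/or pg_hba.conf rejects the forwarded connection, which arrives from the VirtualBox NAT gateway (10.0.2.2) rather than from localhost. A sketch of the usual two changes, assuming the stock Ubuntu paths for the 8.4 packages, followed by /etc/init.d/postgresql-8.4 restart:

      # /etc/postgresql/8.4/main/postgresql.conf
      listen_addresses = '*'                 # default is 'localhost' inside the guest

      # /etc/postgresql/8.4/main/pg_hba.conf
      host    all    all    10.0.2.2/32    md5   # the address forwarded connections appear to come from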

    Read the article

  • Centos CMake Does Not Install Using gcc 4.7.2

    - by Devin Dixon
    A similar problem has been reported here with no solution:https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0 I've added and upgraded gcc to centos cd /etc/yum.repos.d wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++ scl enable devtoolset-1.1 bash The result is this for my gcc [root@hhvm-build-centos cmake-2.8.11.1]# gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC) And I tried to then install cmake through http://www.cmake.org/cmake/resources/software.html#latest But I keep running into this error: Linking CXX executable ../bin/ccmake /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad' /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line /lib64/libtinfo.so.5: could not read symbols: Invalid operation collect2: error: ld returned 1 exit status gmake[2]: *** [bin/ccmake] Error 1 gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2 gmake: *** [all] Error 2 The problem seems to come from the new gcc installed because it works with the default install. Is there a solution to this problem?
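
    The linker message itself points at the likely fix: the devtoolset binutils no longer pull in indirect DSO dependencies, so ccmake has to be linked against the curses/tinfo library explicitly. A sketch of one way to do that, assuming the stock CentOS 6 package name and that cmake's bootstrap script picks LDFLAGS up from the environment:

      yum install ncurses-devel
      # add the missing library to the link line and rebuild
      LDFLAGS="-ltinfo" ./bootstrap --prefix=/usr/local
      gmake && gmake install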

    Read the article

  • SQL 2008 SP2 RsClientPrint ActiveX - "Unable to load client print control"

    - by Miles
    We recently updated our SQL 2008 server to use SP 2 and it's causing a few headaches. We use SSRS on this server, and when a client tries to print a report using the built-in print function, we need to download the RsClientPrint ActiveX control from the server to the client, otherwise the client gets the following error: Unable to load client print control. We have about 700 computers that need this fixed, and I've followed the instructions found at the following URL: http://www.kodyaz.com/articles/client-side-printing-silent-deployment-of-rsclientPrint.aspx We have two issues: most of the users who will be using this ActiveX control are not local administrators, so they will not be able to install the control themselves; and since there are so many computers, this has to be done silently behind the scenes, run by a local admin account. After following the information from the link above, we're able to put the files in the C:\Windows\System32 folder and register the DLL, but we still get the same problem. The only small thing I've noticed is that in the HTML for the report page, everything that references a version is referencing version 2007.100.4000.00, while the version of the DLL that I pulled from the report server is 2007.100.1600.22. Also, some clients that are local administrators are prompted every time to install the ActiveX control when they click print. This works successfully, but we can't have the user asked if they want to install the same control every time they need to print.
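
    Given the version numbers quoted above, one thing to rule out is registering a control that is older than what the SP2 report server now expects (2007.100.1600.22 on the clients vs 2007.100.4000.00 referenced by the report page). A rough sketch of a silent per-machine install, run under a local admin account; the cab and DLL names are illustrative, and the files should be the ones the patched server actually serves:

      rem extract the control shipped by the SP2 report server, then register it silently
      expand RSClientPrint.cab -F:* %SystemRoot%\System32
      regsvr32 /s %SystemRoot%\System32\RSClientPrint.dll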

    Read the article

  • How can I solve the apache2 httpd error "mixing * ports and non-* ports with a NameVirtualHost address is not supported"?

    - by rrc7cz
    Here is the error I get when booting up Apache2: * Starting web server apache2 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results [Wed Oct 21 16:37:26 2009] [warn] NameVirtualHost *:80 has no VirtualHosts I first followed this guide on setting up Apache to host multiple sites: http://www.debian-administration.org/articles/412 I then found a similar question on ServerFault and tried applying the solution, but it didn't help. Here is an example of my final VirtualHost config: <VirtualHost *:80> ServerAdmin [email protected] ServerName www.xxx.com ServerAlias xxx.com # Indexes + Directory Root. DirectoryIndex index.html DocumentRoot /var/www/www.xxx.com # Logfiles ErrorLog /var/www/www.xxx.com/logs/error.log CustomLog /var/www/www.xxx.com/logs/access.log combined </VirtualHost> with the domain X'd out to protect the innocent :-) Also, I have the conf.d/virtual.conf file mentioned in the guide looking like this: NameVirtualHost * The odd thing is that everything appears to work fine for two of the three sites.
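
    The warning is produced because the NameVirtualHost line and the VirtualHost containers disagree about the port: conf.d/virtual.conf declares NameVirtualHost * while every vhost is declared as <VirtualHost *:80>. Making the two match is the usual fix, e.g.:

      # conf.d/virtual.conf
      NameVirtualHost *:80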

    Read the article

  • gzip specific files

    - by byTheDrop
    for some reason these files are not gzipping on my apache server, chrome network tab shows this. Is there a specific directive I can add to htaccess to cache these files? Compressing the following resources with gzip could reduce their transfer size by about two thirds (~680.45KB): adae8bc4c3cb52cbe22358aaced87a72.css could save ~607B css_f91fa8d73b5e7661d6dcf9e58395e533.css could save ~59.54KB jquery.min.js could save ~37.27KB drupal.js could save ~6.15KB auto_image_handling.js could save ~6.72KB lightbox.js could save ~29.38KB superfish.js could save ~2.42KB jquery.bgiframe.min.js could save ~1011B jquery.hoverIntent.minified.js could save ~1.05KB nice_menus.js could save ~581B panels.js could save ~531B jquery.pngFix.js could save ~2.98KB jquery.cycle.all.min.js could save ~20.20KB views_slideshow.js could save ~8.76KB views_slideshow.js could save ~9.02KB wanderlust_custom_videos.js could save ~598B wl_helper.js could save ~777B extlink.js could save ~2.88KB cufon-yui.js could save ~11.89KB googleanalytics.js could save ~1.48KB swfobject.js could save ~6.65KB jquery.jcarousel.min.js could save ~10.19KB jcarousel.js could save ~6.01KB Akzidenz_Grotesk_BE_Super_800.font.js could save ~14.27KB Akzidenz_Grotesk_BE_Bold_700.font.js could save ~12.96KB Akzidenz_Grotesk_BE_Cn_400.font.js could save ~13.39KB SuperCondensed_500.font.js could save ~24.40KB FuturaBold_700.font.js could save ~26.19KB Futura_500.font.js could save ~57.70KB SuperGroteskB_500.font.js could save ~23.86KB jquery.cookie.js could save ~1.25KB wanderlust.js could save ~1.69KB sliderbottom.js could save ~442B jcarousellite_1.0.1.min.js could save ~4.60KB jcarousellite_control.js could save ~224B sitesdropdown.js could save ~1.09KB widgets.js could save ~50.13KB cufon-drupal.js could save ~599B swfobject_api.js could save ~348B ga.js could save ~24.02KB all.js could save ~124.67KB tweet_button.1347008535.html could save ~38.43KB xd_arbiter.php could save ~16.80KB xd_arbiter.php could save ~16.80KB
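
    What Chrome is reporting here is missing compression rather than missing caching, so the relevant Apache piece is mod_deflate. A minimal .htaccess sketch, assuming mod_deflate is loaded and .htaccess overrides are allowed on this server:

      <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css \
            text/javascript application/javascript application/x-javascript
      </IfModule>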

    Read the article

  • Apache Simple Configuration Issue: per-user directory is accessing /~user instead of ~user

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit. I am running into issues setting my per-user directory. The goal is to make localhost/~user map to /home/~user/public_html. I think that I have the permissions right because I have 755 to /home/~user, and I have 755 to /home/~user/public_html/ and I have 777 for all contents inside of /home/~user/public_html/ recursively set. My mod_userdir configuration looks like this: <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir disabled root UserDir enabled huckphin # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # UserDir public_html The error that I am seeing in the error log is this: [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied When I login as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change on my configuration for this to work?
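
    Two notes on the error quoted above: the /~huckphin in the log line is the request URI, not a filesystem path, so that part is normal; and on Fedora the usual cause of a 13/Permission denied from mod_userdir, even with correct Unix permissions, is SELinux. A sketch of the typical adjustments, assuming SELinux is enforcing:

      setsebool -P httpd_enable_homedirs true
      chcon -R -t httpd_user_content_t /home/huckphin/public_html
      # to confirm SELinux is the culprit first:
      grep httpd /var/log/audit/audit.log | tail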

    Read the article

  • How does a web server/the http protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, I know that some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So, if a file exists in the cache, no files are sent. But if the file on the server has changed since the file was cached in the browser, the file is sent and updated anyhow. Then, if you have compression like gzipping enabled on the server, the files that are to be provided to the client must be gzipped on the way, requiring some amount of server-side processing. But how is this managed? The logical approach, it seems to me, is that the web server should have a cache as well, containing the newest version of all files that have been requested within a certain time span, along with a compressed version of these files, so that compression would not have to be done each time a file is requested. Also, how are files actually requested? Does the browser ask for each file as it encounters it in the HTML code and finds it is not stored in the local cache, or does it add up all the files that are needed and ask for the whole bunch at the same time? But that's only guessing from a programming point of view, and I don't really know. If the answers are very different among web server systems, I'm primarily interested in Apache, but other answers are appreciated, too.
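
    The revalidation half of this is handled by conditional requests: the browser stores the validators the server sent with the file (Last-Modified and/or ETag), replays them on the next request, and the server answers 304 Not Modified with no body when nothing has changed. Roughly, the exchange looks like this (header values invented for illustration):

      GET /style.css HTTP/1.1
      Host: www.example.com
      If-Modified-Since: Tue, 01 Jun 2010 08:00:00 GMT
      If-None-Match: "3e86-5b0-2a1b"

      HTTP/1.1 304 Not Modified
      ETag: "3e86-5b0-2a1b"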

    Read the article

  • How to use cURL to FTPS upload to SecureTransport (hint: SITE AUTH and client certificates)

    - by Seamus Abshere
    I'm trying to connect to SecureTransport 4.5.1 via FTPS using curl compiled with gnutls. You need to use --ftp-alternative-to-user "SITE AUTH" per http://curl.haxx.se/mail/lib-2006-07/0068.html Do you see anything wrong with my client certificates? I try with # mycert.crt -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- # mykey.pem -----BEGIN RSA PRIVATE KEY----- ... -----END RSA PRIVATE KEY----- And it says "530 No client certificate presented": myuser@myserver ~ $ curl -v --ftp-ssl --cert mycert.crt --key mykey.pem --ftp-alternative-to-user "SITE AUTH" -T helloworld.txt ftp://ftp.example.com:9876/upload/ * About to connect() to ftp.example.com port 9876 (#0) * Trying 1.2.3.4... connected * Connected to ftp.example.com (1.2.3.4) port 9876 (#0) < 220 msn1 FTP server (SecureTransport 4.5.1) ready. > AUTH SSL < 334 SSLv23/TLSv1 * found 142 certificates in /etc/ssl/certs/ca-certificates.crt > USER anonymous < 331 Password required for anonymous. > PASS [email protected] < 530 Login incorrect. > SITE AUTH < 530 No client certificate presented. * Access denied: 530 * Closing connection #0 curl: (67) Access denied: 530 I also tried with a pk8 version... # openssl pkcs8 -in mykey.pem -topk8 -nocrypt > mykey.pk8 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- ...but got exactly the same result. What's the trick to sending a client certificate to SecureTransport?
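
    One variation worth trying before digging into the server side: make the certificate type explicit and hand curl the certificate and key as a single PEM file, since a client certificate that the TLS backend silently fails to load produces exactly this "530 No client certificate presented" symptom. A sketch, reusing the host and options from the question:

      cat mycert.crt mykey.pem > client.pem
      curl -v --ftp-ssl --cert client.pem --cert-type PEM \
           --ftp-alternative-to-user "SITE AUTH" \
           -T helloworld.txt ftp://ftp.example.com:9876/upload/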

    Read the article

  • How to configure postfix for per-sender SASL authentication

    - by Marwan
    I have two gmail accounts, and I want to configure my local postfix server as a client which does SASL authentication with smtp.gmail.com:587 with credentials that depend on the sender address. So, let's say that my gmail accounts are: [email protected] and [email protected]. If I sent a mail with [email protected] in the FROM header field, then postfix should use the credentials: [email protected]:psswd1 to do SASL authentication with gmail SMTP server. Similarly with [email protected], it should use [email protected]:passwd2. Sounds fairly simple. Well, I followed the postfix official documentation at http://www.postfix.org/SASL_README.html, and I ended up with the following relevant configurations: /etc/postfix/main.cf smtp_sasl_auth_enable = yes smtp_sasl_security_options = noanonymous smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sender_dependent_authentication = yes sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay smtp_tls_security_level = secure smtp_tls_CAfile = /etc/ssl/certs/Equifax_Secure_CA.pem smtp_tls_CApath = /etc/ssl/certs smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache smtp_tls_session_cache_timeout = 3600s smtp_tls_loglevel = 1 tls_random_source = dev:/dev/urandom relayhost = smtp.gmail.com:587 /etc/postfix/sasl_passwd [email protected] [email protected]:passwd1 [email protected] [email protected]:passwd2 smtp.gmail.com:587 [email protected]:passwd1 /etc/postfix/sender_relay [email protected] smtp.gmail.com:587 [email protected] smtp.gmail.com:587 After I'm done with the configurations I did: $ postmap /etc/postfix/sasl_passwd $ postmap /etc/postfix/sender_relay $ /etc/init.d/postfix restart The problem is that when I send a mail from [email protected], the message ends up in the destination with sender address [email protected] and NOT [email protected], which means that postfix always ignores the per-sender configurations and send the mail using the default credentials (the third line in /etc/postfix/sasl_passwd above). I checked the configurations multiple times and even compared them to those in various blog posts addressing the same issue but found them to be more or less the same as mine. So, can anyone point me in the right direction, in case I'm missing something? Many thanks.
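
    Since the configuration above matches the SASL_README recipe, the first thing to verify is that the lookups actually resolve for the sender address Postfix sees: both maps are keyed on the envelope sender (MAIL FROM), not the From: header, so a mail client that submits with a different envelope sender silently falls back to the default relayhost credentials, which would produce exactly the rewritten sender described. A quick check, using the addresses as redacted in the question:

      postmap -q "[email protected]" hash:/etc/postfix/sasl_passwd
      postmap -q "[email protected]" hash:/etc/postfix/sender_relay
      # and watch which envelope sender the submitting client really uses:
      tail -f /var/log/mail.log | grep 'from=<'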

    Read the article

  • How do I disable MEDIUM and WEAK/LOW strength ciphers in Apache + mod_ssl?

    - by superwormy
    A PCI Compliance scan has suggested that we disable Apache's MEDIUM and LOW/WEAK strength ciphers for security. Can someone tell me how to disable these ciphers? Apache v2.2.14 mod_ssl v2.2.14 This is what they've told us: Synopsis : The remote service supports the use of medium strength SSL ciphers. Description : The remote host supports the use of SSL ciphers that offer medium strength encryption, which we currently regard as those with key lengths at least 56 bits and less than 112 bits. Solution: Reconfigure the affected application if possible to avoid use of medium strength ciphers. Risk Factor: Medium / CVSS Base Score : 5.0 (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N) [More] Synopsis : The remote service supports the use of weak SSL ciphers. Description : The remote host supports the use of SSL ciphers that offer either weak encryption or no encryption at all. See also : http://www.openssl.org/docs/apps/ciphers .html Solution: Reconfigure the affected application if possible to avoid use of weak ciphers. Risk Factor: Medium / CVSS Base Score : 5.0 (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N) [More]
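
    With mod_ssl this comes down to two directives: restrict the protocols and give SSLCipherSuite an expression that excludes the weak and medium suites. A commonly used starting point is sketched below; it is worth previewing the resulting cipher list with openssl before rolling it out, since PCI scanners differ in what they flag.

      SSLProtocol all -SSLv2
      SSLHonorCipherOrder on
      SSLCipherSuite HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK
      # preview what that expands to:
      #   openssl ciphers -v 'HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK'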

    Read the article

  • nginx: How can I set proxy_* directives only for matching URIs?

    - by Artem Russakovskii
    I've been at this for hours and I can't figure out a clean solution. Basically, I have an nginx proxy setup, which works really well, but I'd like to handle a few urls more manually. Specifically, there are 2-3 locations for which I'd like to set proxy_ignore_headers to Set-Cookie to force nginx to cache them (nginx doesn't cache responses with Set-Cookie as per http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers). So for these locations, all I'd like to do is set proxy_ignore_headers Set-Cookie; I've tried everything I could think of outside of setting up and duplicating every config value, but nothing works. I tried: Nesting location directives, hoping the inner location which matches on my files would just set this value and inherit the rest, but that wasn't the case - it seemed to ignore anything set in the outer location, most notably proxy_pass and I end up with a 404). Specifying the proxy_cache_valid directive in an if block that matches on $request_uri, but nginx complains that it's not allowed ("proxy_cache_valid" directive is not allowed here). Specifying a variable equal to "Set-Cookie" in an if block, and then trying to set proxy_cache_valid to that variable later, but nginx isn't allowing variables for this case and throws up. It should be so simple - modifying/appending a single directive for some requests, and yet I haven't been able to make nginx do that. What am I missing here? Is there at least a way to wrap common directives in a reusable block and have multiple location blocks refer to it, after adding their own unique bits? Thank you. Just for reference, the main location / block is included below, together with my failed proxy_ignore_headers directive for a specific URI. location / { # Setup var defaults set $no_cache ""; # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie if ($request_method !~ ^(GET|HEAD)$) { set $no_cache "1"; } if ($http_user_agent ~* '(iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry|webos|s8000|bada)') { set $mobile_request '1'; set $no_cache "1"; } # feed crawlers, don't want these to get stuck with a cached version, especially if it caches a 302 back to themselves (infinite loop) if ($http_user_agent ~* '(FeedBurner|FeedValidator|MediafedMetrics)') { set $no_cache "1"; } # Drop no cache cookie if need be # (for some reason, add_header fails if included in prior if-block) if ($no_cache = "1") { add_header Set-Cookie "_mcnc=1; Max-Age=17; Path=/"; add_header X-Microcachable "0"; } # Bypass cache if no-cache cookie is set, these are absolutely critical for Wordpress installations that don't use JS comments if ($http_cookie ~* "(_mcnc|comment_author_|wordpress_(?!test_cookie)|wp-postpass_)") { set $no_cache "1"; } if ($request_uri ~* wpsf-(img|js)\.php) { proxy_ignore_headers Set-Cookie; } # Bypass cache if flag is set proxy_no_cache $no_cache; proxy_cache_bypass $no_cache; # under no circumstances should there ever be a retry of a POST request, or any other request for that matter proxy_next_upstream off; proxy_read_timeout 86400s; # Point nginx to the real app/web server proxy_pass http://localhost; # Set cache zone proxy_cache microcache; # Set cache key to include identifying components proxy_cache_key $scheme$host$request_method$request_uri$mobile_request; # Only cache valid HTTP 200 responses for this long proxy_cache_valid 200 15s; #proxy_cache_min_uses 3; # Serve from cache if currently refreshing proxy_cache_use_stale updating timeout; # Send appropriate headers 
through proxy_set_header Host $host; # no need for this proxy_set_header X-Real-IP $remote_addr; # no need for this proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Set files larger than 1M to stream rather than cache proxy_max_temp_file_size 1M; access_log /var/log/nginx/androidpolice-microcache.log custom; }
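
    On the reuse question: nginx's mechanism for "wrap common directives in a reusable block" is the include directive, so the Set-Cookie-ignoring URIs can get their own location that pulls the shared proxy_* settings in from a file and then overrides the one directive that differs. A sketch (the include file name is illustrative; it would hold the proxy_cache, proxy_cache_key, timeout and header lines currently in location /):

      location ~* wpsf-(img|js)\.php$ {
          include /etc/nginx/wp_proxy_common.conf;   # shared proxy_* directives factored out
          proxy_ignore_headers Set-Cookie;           # cache even though the upstream sets a cookie
          proxy_hide_header    Set-Cookie;           # and don't replay that cookie out of the cache
          proxy_pass http://localhost;
      }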

    Read the article

  • How to know if your computer is hit by a dnschanger virus?

    - by kira
    The Federal Bureau of Investigation (FBI) is on the final stage of its Operation Ghost Click, which strikes against the menace of the DNSChanger virus and trojan. Infected PCs running the DNSChanger malware at unawares are in the danger of going offline on this coming Monday (July 9) when the FBI plans to pull down the online servers that communicate with the virus on host computers. After gaining access to a host PC, the DNSChanger virus tries to modify the DNS (Domain Name Server) settings, which are essential for Internet access, to send traffic to malicious servers. These poisoned web addresses in turn point traffic generated through infected PCs to fake or unsafe websites, most of them running online scams. There are also reports that the DNSChanger virus also acts as a trojan, allowing perpetrators of the hack attack to gain access to infected PCs. Google issued a general advisory for netizens in May earlier this year to detect and remove DNSChanger from infected PCs. According to our report, some 5 lakh PCs were still infected by the DNSChanger virus in May 2012. The first report of the DNSChanger virus and its affiliation with an international group of hackers first came to light towards the end of last year, and the FBI has been chasing them down ever since. The group behind the DNSChanger virus is estimated to have infected close to 4 million PCs around the world in 2011, until the FBI shut them down in November. In the last stage of Operation Ghost Click, the FBI plans to pull the plug and bring down the temporary rogue DNS servers on Monday, July 9, according to an official announcement. As a result, PCs still infected by the DNSChanger virus will be unable to access the Internet. How do you know if your PC has the DNSChanger virus? Don’t worry. Google has explained the hack attack and tools to remove the malware on its official blog. Trend Micro also has extensive step-by-step instructions to check if your Windows PC or Mac is infected by the virus. The article is found at http://www.thinkdigit.com/Internet/Google-warns-users-about-DNSChanger-malware_9665.html How to check if my computer is one of those affected?
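
    The check itself is just a matter of looking at which DNS servers the machine (or its home router) is configured to use and comparing them against the rogue ranges published by the FBI/DCWG; 85.255.112.0-85.255.127.255 is the best known of them. On Windows, ipconfig /all | findstr /i "DNS" shows the configured servers; on Mac or Linux, for example:

      cat /etc/resolv.conf                 # Linux: the nameserver lines
      scutil --dns | grep nameserver       # Mac OS X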

    Read the article

  • SSL on nginx + unicorn got "Error 102 (net::ERR_CONNECTION_REFUSED)"

    - by panggi
    I tried to deploy my app on EC2 (opened port: 22, 80, 443) App: Rails 3.2.2 Server: nginx 1.2.1 unicorn gem (latest) ubuntu 12.04 Deployer: Capistrano I tried to follow the instruction in Railscasts : http://railscasts.com/episodes/335-deploying-to-a-vps (Sorry, it's a Pro Episode) Anything fine with normal port 80 http but i got Error 102 after trying to use SSL, here is the nginx.conf content: upstream unicorn { server unix:/tmp/unicorn.frontend.sock fail_timeout=0; } server { server_name beta.sukeru.com; listen 443 default; root /home/deployer/apps/appname/current/public; ssl on; ssl_certificate server.crt; ssl_certificate_key server.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; proxy_pass http://unicorn; } error_page 500 502 503 504 /500.html; } In production.rb i set: config.force_ssl = true Can anyone give a solution for this? :)
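
    ERR_CONNECTION_REFUSED means the TLS handshake never even started - nothing is accepting connections on port 443 - so before touching the Rails side it is worth confirming that nginx actually loaded this server block and is listening, and that the security group change for 443 really took effect. A few checks to run on the instance:

      sudo nginx -t                       # does the config parse? a bad ssl_certificate path blocks the (re)load
      sudo netstat -tlnp | grep ':443'    # is anything bound to 443 at all?
      curl -vk https://127.0.0.1/         # test locally, bypassing the EC2 security group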

    Read the article

  • postfix Mail filters not running behind a controlled enviornment

    - by Ashish
    Hi, I have deployed a postfix server for email receiving. On this I have configured SenderID + SPF milter, by referring to http://www.postfix.org/MILTER_README.html The command that I used is as follows: ./sid-filter -u postfix -p inet:10027@localhost -l Following are my settings in main.cf file: #Milter support for smtpd mail smtpd_milters = inet:localhost:10027, inet:localhost:10028 # Milters for non-SMTP mail. non_smtpd_milters = inet:localhost:10027, inet:localhost:10028 milter_default_action = reject # Postfix . 2.6 #milter_protocol = 6 # 2.3 . Postfix . 2.5 milter_protocol = 2 Now I have this observation: One of the postfix that is setup on AWS CentOS 5.5 is working fine and is able to receive mails on defined mx record. One of the similar postfix(as in step 1) that is setup behind one of the corporate firewalls is not able to receive any mails and is giving following type of error logs: connect from xxxxxx.austin.hp.com[xx.xxx.96.198] May 25 13:20:02 g2t0385g postfix/smtpd[11733]: C11F9B0194: client=xxxxxxx.austin.hp.com[15.217.96.198] May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: message-id= May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: milter-reject: END-OF-MESSAGE from xxxxxx.austin.hp.com[xx.xxx.96.198]: 5.7.1 Command rejected; from= to= proto=ESMTP helo= Here the 'sid-filter' is giving problems. Any idea, what I am doing wrong? Please help. Thanks in advance Ashish Sharma
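
    A milter-reject at END-OF-MESSAGE combined with milter_default_action = reject is what Postfix does whenever the sid-filter itself fails, and behind a corporate firewall the usual reason for that is that the filter cannot make the SPF/Sender ID DNS lookups it needs. Two things worth checking on the firewalled box (the second is a temporary diagnostic, not a fix):

      # can this host resolve TXT/SPF records through the firewall?
      dig +short txt gmail.com
      # while diagnosing, accept mail instead of rejecting when the milter errors out:
      postconf -e 'milter_default_action = accept'
      postfix reload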

    Read the article

  • EC2 Ubuntu - Force instance to use internal IP

    - by Peter
    I've just set up a micro instance on EC2 (AMI ID ami-e59ca991). I had hoped to avoid charges for a year as my usage falls well within the bound of the free tier. I have been charged $0.01 for "regional data transfer". I read here that this is because my instance is talking to its self via it's external IP address. From what I've Googled it looks like you can stop the charges by making sure that the instance uses its internal IP address. However, when I ping the hostname of my instance internally (via an ssh session) it resolves to the instances internal IP address. How can I configure my instance so that I do not get these charges? Is it as simple as adding a line to my hosts file? Additionally, is this the real reason for the charge? I'm concerned that I've misunderstood the pricing somewhere. I have Apace and MySQL (with phpmyadmin) running on the machine - could I be being charged for data transfer associated with these (I have only one flat HTML page and I have only logged in via phpmyadmin - I have no data in my database). Edit: Additionally, my user account on MySQL was declared as: grant all privileges on *.* to 'peter'@'localhost'; Should I have instead used the internal hostname for the instance? grant all privileges on *.* to '[email protected]'; Cheers, Pete
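
    From inside EC2 the instance's own public DNS name already resolves to the private address, so the cleanest fix is to have the instance talk to itself by that name or by the private IP rather than by its public IP. The private address is available from the instance metadata service, and an /etc/hosts entry is a workable shortcut (the address and hostname below are placeholders). The MySQL grant to 'peter'@'localhost' is fine as long as connections stay on localhost.

      # find the internal IP from on the instance itself
      curl -s http://169.254.169.254/latest/meta-data/local-ipv4
      # optional /etc/hosts shortcut so the site name resolves internally
      # 10.0.0.12   www.example.com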

    Read the article

  • How to find out where or if MYSQL5 logs are stored on a machine WHM/Cpanel

    - by moi
    I have a WHM/Cpanel re-seller hosting account on a virtual private server (Linux). I have root access to the machine via SSH. I am trying to locate a file that contains information that will help me to determine which users have accessed which db and from which hosts. I would imagine this kind of data is stored in a log file somewhere. The MySQL page says: The general query log - Established client connections and statements received from clients See: http://dev.mysql.com/doc/refman/5.0/en/server-logs.html It also says: By default, all log files are created in the mysqld data directory. So, I am NOT asking where the general query logs are stored (because I expect I will get answers saying "it depends"). Please help me work out: "How can I go about finding out where the MySQL general query logs are stored on a Linux machine?" A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following info: [mysqld] skip-bdb skip-innodb set-variable = max_connections=500 safe-show-database ~ ~ I have looked in: /var/lib/mysql/ But I could not see any log-like file names in that directory. Any clues on this would be most welcome.
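
    Rather than hunting the filesystem, the server itself can be asked whether and where it is logging - with the caveat that on MySQL 5.0 the general query log is switched on by the old log option, it is off unless my.cnf enables it (the my.cnf shown above does not), and it only records activity from the moment it is turned on. A sketch:

      mysql -e "SHOW VARIABLES LIKE 'datadir'"
      mysql -e "SHOW VARIABLES LIKE 'log%'"     # is the general log on, and where does it write?
      # to enable it on 5.0, add to the [mysqld] section of /etc/my.cnf and restart mysqld:
      #   log = /var/lib/mysql/general_query.log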

    Read the article

  • Phpmyadmin location for nginx

    - by multiformeinggno
    I installed nginx and phpmyadmin. I set up a domain with these parameters to test phpmyadmin: server { listen 80; server_name domain.com; root /usr/share/phpmyadmin; index index.php; fastcgi_index index.php; location ~ \.php$ { include /etc/nginx/fastcgi.conf; fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name; fastcgi_pass unix:/var/run/php5-fpm.sock; } } And everything works properly (if I visit the domain I can login to phpmyadmin). The problem is that it was just for testing phpmyadmin, now I'd like to move this to my 'default' site. But I can't figure out how to have it on /phpmyadmin. Here's the config for the 'default' nginx site (where I'd like to put this /phpmyadmin location): server { server_name blabla; access_log /var/log/nginx/$host.access.log; error_log /var/log/nginx/error.log; root /var/www/default; index index.php index.html; location / { try_files $uri $uri/ index.php; } location ~ \.php$ { include /etc/nginx/fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; } ### NginX Status location /nginx_status { stub_status on; access_log off; } ### FPM Status location ~ ^/(status|ping)$ { fastcgi_pass unix:/var/run/php5-fpm.sock; access_log off; } }
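
    A common way to fold this into the default server block is a dedicated /phpmyadmin location whose root is /usr/share (so /phpmyadmin maps onto /usr/share/phpmyadmin), with a nested location handling its PHP scripts - roughly as below, reusing the fastcgi settings from the test vhost:

      location /phpmyadmin {
          root /usr/share;
          index index.php;

          location ~ ^/phpmyadmin/.+\.php$ {
              root /usr/share;
              include /etc/nginx/fastcgi.conf;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
          }
      }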

    Read the article

  • Installing Munin on Centos 6

    - by justinhj
    I've hit problems installing munin on Centos 6. This seems to be a conflict between parts of Perl. I think the version of Perl is newer on Centos 6 (v5.10.1) When installing munin via yum I get errors relating to perl dependencies as below. I'm not a big enough whiz at yum or rpm to figure out the issue. Munin documentation does not yet talk about installing to Centos 6.0 Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(Net::SNMP) Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: bitstream-vera-fonts Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(HTML::Template) Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-SNMP Error: Package: munin-common-1.4.2-0.rpl1.el5.noarch (/munin-common-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8) Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(DBI) Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Log::Log4perl) Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(LWP::Simple) Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8) Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(RRDs) Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-Server Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Date::Manip) Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8) Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-Server Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(CGI::Fast) Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Time::HiRes)
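
    Every one of those failures is the same underlying problem: the packages being installed are .el5 builds (note the perl(:MODULE_COMPAT_5.8.8) requirement - EL5's perl), while CentOS 6 ships perl 5.10.1. Installing Munin from EPEL 6 instead sidesteps the whole dependency chain; roughly as below, though the epel-release URL moves over time, so treat the path as illustrative:

      rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
      yum install munin munin-node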

    Read the article

  • Why do (Russian) characters in some received emails change when reading in David InfoCenter?

    - by waszkiewicz
    I'm using David InfoCenter as email Software, and I have troubles with some of my emails in Russian. It's only a few letters, in some emails (sent from different people), like for example the "R" ("P" in russian) will be shown as a "T". In other emails in Russian, the problem doesn't appear. Isn't it strange? Does anyone had the same problem already and found where it came from? When I transmit that email to an external mailbox (internet email account), it's even worse, and gives me symbols instead of all Russian letters... The default encoding was "Russian (ISO)", I changed it to "Russian (Windows)", but same problem. Another weird reaction is when I write an intern email and name it TEST in Russian (????), with ???? in the text window, it changes the title to "Oano"? But the content stays in Russian... With Mailinator I got the following, for message and subject "????": Subject: ???? [..] MIME-Version: 1.0 Content-Type: multipart/alternative; boundary="----_=_NextPart_000_00017783.4AF7FB71" This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible. ------_=_NextPart_000_00017783.4AF7FB71 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 0KLQtdGB0YI= ------_=_NextPart_000_00017783.4AF7FB71 Content-Type: text/html; charset="utf-8" Content-Transfer-Encoding: base64 PCFET0NUWVBFIEhUTUwgUFVCTElDICItLy9XM0MvL0RURCBIVE1MIDQuMCBUcmFuc2l0aW9uYWwv L0VOIj4NCjxIVE1MPjxIRUFEPg0KPE1FVEEgaHR0cC1lcXVpdj1Db250ZW50LVR5cGUgY29udGVu dD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjxNRVRBIG5hbWU9R0VORVJBVE9SIGNvbnRl bnQ9Ik1TSFRNTCA4LjAwLjYwMDEuMTg4NTIiPjwvSEVBRD4NCjxCT0RZIHN0eWxlPSJGT05UOiAx MHB0IENvdXJpZXIgTmV3OyBDT0xPUjogIzAwMDAwMCIgbGVmdE1hcmdpbj01IHRvcE1hcmdpbj01 Pg0KPERJViBzdHlsZT0iRk9OVDogMTBwdCBDb3VyaWVyIE5ldzsgQ09MT1I6ICMwMDAwMDAiPtCi 0LXRgdGCPFNQQU4gDQppZD10b2JpdF9ibG9ja3F1b3RlPjxTUEFOIGlkPXRvYml0X2Jsb2NrcXVv dGU+PC9ESVY+PC9TUEFOPjwvU1BBTj48L0JPRFk+PC9IVE1MPg== ------_=_NextPart_000_00017783.4AF7FB71--
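
    A useful data point in the Mailinator dump above: the base64 body 0KLQtdGB0YI= decodes to the UTF-8 string Тест (presumably what the ???? placeholders in the question originally were), so the message leaving the system is intact and the substitution happens on the displaying side. Р turning into Т is the classic symptom of a Cyrillic 8-bit encoding mismatch - KOI8-R text rendered as Windows-1251 produces exactly that substitution. Easy to confirm on any Unix box with a UTF-8 terminal:

      echo '0KLQtdGB0YI=' | base64 -d      # prints: Тест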

    Read the article

  • Using the RST3 plugin in the Leo Outliner

    - by T-Boy
    I'm currently trying out the Leo Outliner, and I've heard quite a bit about the RST3 plugin that it has. I'm not planning to use Leo to program just yet -- at this point I'm wondering if it might be useful for generating HTML and PDF documents, as I'm quite currently enamored with RST and how it works. I'm using my Ubuntu Netbook Remix netbook (running 9.10, I believe). I think I've got it down pat, more or less -- I've installed docutils using the Synaptics Package Manager, and I think I've gotten SilverCity installed, as per the requirements -- I've downloaded the archive, and then run "sudo python setup.py install" with no errors. The thing is, I'm not exactly sure how to invoke the rst3 plugin itself. It doesn't appear in the Plugins menu for Leo right now, and the documentation I've managed to source doesn't seem to clearly explain how to use the plugin. Has anyone had any experience in using the rst3 plugin in Leo? It's a little confusing right now, and searches on Google doesn't seem to be helping much any more. I'm using the latest 4.7.1 final version of Leo from the Synaptics Package Manager (was informed that this would have offered the best integration with UNR, so I figured, what the heck). Have I missed out on any steps here, and are there any useful tutorials on how to use the rst3 plugin?
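
    Hedging heavily here, since Leo's plugin machinery changes between releases: plugins are normally switched on through an @enabled-plugins node in myLeoSettings.leo rather than through the Plugins menu, and rst3 is then driven from @rst nodes via the minibuffer rather than from a menu entry. Very roughly:

      # body of the @enabled-plugins node in myLeoSettings.leo:
      rst3.py

      # in the outline: create a node headed  @rst my_document.html  and put the
      # reStructuredText in its children, then run the plugin from the minibuffer:
      #   Alt-X rst3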

    Read the article

  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are setup to use a FWMARK as configured in iptables. virtual_server fwmark 300000 { delay_loop 10 lb_algo wrr lb_kind NAT persistence_timeout 180 protocol TCP real_server 10.10.35.31 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31" misc_timeout 30 } } real_server 10.10.35.32 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32" misc_timeout 30 } } real_server 10.10.35.33 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33" misc_timeout 30 } } real_server 10.10.35.34 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34" misc_timeout 30 } } } http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html [root@lb1 ~]# iptables -L -n -v -t mangle Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes) 190M 167G MARK tcp -- * * 0.0.0.0/0 w1.x1.y1.4 multiport dports 80,443 MARK set 0x493e0 62M 58G MARK tcp -- * * 0.0.0.0/0 w1.x1.y2.4 multiport dports 80,443 MARK set 0x493e0 [root@lb1 ~]# ipvsadm -L IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn FWM 300000 wrr persistent 180 -> 10.10.35.31:0 Masq 24 1 0 -> dis2.domain.com:0 Masq 24 3 231 -> 10.10.35.33:0 Masq 24 0 208 -> 10.10.35.34:0 Masq 24 0 0 At the time the realservers were setup, there was a misconfigured dns for some hosts in the 10.10.35.0/24 network. Thereafter, we fixed the DNS. However, the hosts continue to show up as only their IP numbers (10.10.35.31,10.10.35.33,10.10.35.34) above. [root@lb1 ~]# host 10.10.35.31 31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com. OS is CentOS 6.3. Ipvsadm is ipvsadm-1.25-10.el6.x86_64. kernel is kernel-2.6.32-71.el6.x86_64. Keepalived is keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?
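
    ipvsadm does its reverse lookups through the libc resolver every time it prints the table, whereas host(1) goes straight to DNS, so the mixed output usually means the system resolver still returns something different for those three addresses - stale /etc/hosts entries or nsswitch ordering being common causes. Two quick checks, plus the option of sidestepping resolution entirely:

      getent hosts 10.10.35.31 10.10.35.33 10.10.35.34   # what the libc resolver (incl. /etc/hosts) says
      grep 10.10.35 /etc/hosts
      ipvsadm -L -n                                      # numeric output, no lookups at all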

    Read the article

  • Configure Plesk only for Tomcat-Java

    - by AJIT RANA
    I need to configure Tomcat on a Linux dedicated server, through Plesk, to serve only a Java project. The following services are running on it: 1. Apache on port 80, 2. Tomcat on port 8080/9080, 3. MySQL on port 3306. The problem is this: I need to serve only the Java project on this server from port 80. Right now, when a user types my site name, the default page served is the index.html or .php file from Apache's root directory. So how can I serve the Java project from the server's default port 80 after deploying the .war (Java project) file to the server? The users who want to access my site do not know the Tomcat port number (9080 here) or the deployed file name. Here is the problem in more detail. Suppose my site name is www.example.com, hosted on a Linux dedicated server with Plesk installed, along with Apache, Tomcat and MySQL. To run my Java project on it at the moment, I need to enter www.example.com:9080/java_project_name/ in the browser. How can I run this project from the URL www.example.com alone, so that it calls the default .jsp file from the java_project_name directory? I do not want the port number and java_project_name in the URL; my client who wants to access this project does not know the port number or the project name. He only knows the URL www.example.com, and when he browses to it, it should serve the default page from the project directory. What do we need to do to implement this? Please help. Thanks
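
    Setting Plesk's own Java integration aside, the two standard ingredients are: deploy the WAR as ROOT.war so Tomcat serves it as the default context (no /java_project_name in the URL), and have Apache on port 80 forward requests to Tomcat over AJP so visitors never see port 9080. A hedged sketch of the Apache side, assuming the stock AJP connector port 8009 and that mod_proxy/mod_proxy_ajp are loaded; under Plesk this would normally go in the domain's vhost include rather than straight into httpd.conf:

      <VirtualHost *:80>
          ServerName www.example.com
          ProxyPass        / ajp://localhost:8009/
          ProxyPassReverse / ajp://localhost:8009/
      </VirtualHost>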

    Read the article

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off: Header unset ETag, FileETag None I know that I should use either Expires or Cache-Control. In addition, I know that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess: <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$"> Header set Cache-Control "max-age=172800, public, must-revalidate" </FilesMatch> With this server I am not going to be able to rely on staff to rename images, css and js in web applications, so I do not want to set the expires far in the future without knowing (with good certainty) that most/all browsers will check to see if content has changed. What I do not want to happen is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can, while still ensuring that most/all browsers will check with the server to see if components have changed. I have access to both the .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration for me to achieve my goals for this client's server? Thanks for your help
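
    Given the stated goal - meaningful caching while browsers still revalidate - one reasonable shape for this is to keep the validators (ETag or Last-Modified), keep must-revalidate, and let mod_expires stamp a modest per-type max-age instead of the blanket two-day header. A sketch for .htaccess, assuming mod_expires and mod_headers are both available; the lifetimes are arbitrary starting points, not recommendations:

      ExpiresActive On
      ExpiresByType image/png              "access plus 2 days"
      ExpiresByType image/jpeg             "access plus 2 days"
      ExpiresByType text/css               "access plus 1 day"
      ExpiresByType application/javascript "access plus 1 day"

      <FilesMatch "\.(ico|pdf|flv|jpe?g|png|gif|js|css|swf)$">
        Header append Cache-Control "public, must-revalidate"
      </FilesMatch>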

    Read the article
