Search Results

Search found 23754 results on 951 pages for 'unobtrusive javascript'.


  • Can you work with SSJS on Apache?

    - by DMin
    I just stumbled upon SSJS (Server-Side JavaScript). It sounds very interesting, but the information about it on the internet is scant and contradictory. Do you know if it can be implemented on an Apache web server? If yes, does it work by default, or does something else need to be installed? And if it does work by default, how are the files to be named and executed? Thanks in advance :)
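
    Not an authoritative answer to the question, but for context: nothing SSJS-related works on Apache "by default"; you wire an interpreter in yourself. One concrete route is plain CGI, sketched here under the assumption that a server-side JavaScript runtime such as Node.js or Rhino is installed:

        # httpd.conf: anything under /cgi-bin/ runs as a CGI program
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

        # /var/www/cgi-bin/hello.js (make it executable with chmod +x)
        #!/usr/bin/env node
        console.log("Content-Type: text/html\n");
        console.log("<p>Hello from server-side JavaScript</p>");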

    Read the article

  • Subdomains with /etc/hosts and Apache for Gitorious

    - by QLands
    I managed to get a local install of Gitorious working. Now I need to finalize the Apache integration using a virtual host, but nothing seems to work. See for example my /etc/hosts file:

        127.0.0.1      localhost
        172.26.17.70   darkstar.ilri.org darkstar
        172.26.17.70   git.darkstar.ilri.org

    My vhosts.conf has the following entries:

        # Use name-based virtual hosting.
        NameVirtualHost *:80

        <VirtualHost *:80>
            <Directory /srv/httpd/htdocs>
                Options Indexes FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ServerName darkstar.ilri.org
            DocumentRoot /srv/httpd/htdocs
            ErrorLog /var/log/httpd/error_log
            AddHandler cgi-script .cgi
        </VirtualHost>

        <VirtualHost *:80>
            <Directory /srv/httpd/git.darkstar.ilri.org/gitorious/public>
                Options FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from All
            </Directory>
            AddHandler cgi-script .cgi
            DocumentRoot /srv/httpd/git.darkstar.ilri.org/gitorious/public
            ServerName git.darkstar.ilri.org
            ErrorLog /var/www/git.darkstar.ilri.org/log/error.log
            CustomLog /var/www/git.darkstar.ilri.org/log/access.log combined
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </FilesMatch>
            FileETag None
            RewriteEngine On
            RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
            RewriteCond %{SCRIPT_FILENAME} !maintenance.html
            RewriteRule ^.*$ /system/maintenance.html [L]
        </VirtualHost>

    Now, when I go to darkstar.ilri.org in Firefox, it shows the default Apache screen ("It works!"), but when I go to git.darkstar.ilri.org it waits a few seconds, then falls back to darkstar.ilri.org and the default Apache page. No error is reported. If I run httpd -S I get:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80   is a NameVirtualHost
               default server darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
               port 80 namevhost darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
               port 80 namevhost git.darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:37)
        Syntax OK

    The funny thing is that if I configure Gitorious on a host called gitrepository, add 127.0.0.1 gitrepository, and browse to gitrepository, Gitorious works. But why not with git.darkstar.ilri.org? Many thanks in advance.
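
    One low-level check worth making (a debugging sketch, not a known fix): take Firefox and name resolution out of the picture and ask Apache directly for each vhost by Host header. If the correct content comes back for git.darkstar.ilri.org here, the vhosts are fine and the problem is client-side (DNS or browser caching); if not, the problem is in the vhost itself:

        # Both requests hit the same IP; only the Host header differs
        curl -s -H 'Host: darkstar.ilri.org'     http://172.26.17.70/ | head
        curl -s -H 'Host: git.darkstar.ilri.org' http://172.26.17.70/ | head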

    Read the article

  • Can't re-open "Group My Tabs" page after killing Firefox

    - by isomorphismes
    Due to memory problems I closed Firefox while the Group My Tabs (Ctrl+E) feature was displaying the multiple tab groups. After 20 minutes the process still hadn't finished, so I killed it with killall firefox. When I restart Firefox, a JavaScript script called tabview.js hangs unless I click Stop at the warning screen. If I do hit Stop, I can no longer open the tab view, so I can't get to any of the tabs except those in the group I was last looking at. Any suggestions?

    Read the article

  • Hardening Word and Reader against exploits

    - by satuon
    I have recently heard a lot about exploits for PDF and DOC files on Windows which, when opened in Reader or Word, infect the computer. I'm assuming most of those exploits rely on some kind of active content; I've heard that Reader allows JavaScript, for example. I already have antivirus software, but I've heard it often doesn't catch these types of exploits, so I want to try a little proactive defense. Is there a way to harden Reader and Word by disabling the plugins or options that exploits often use?
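
    On the Reader side, the option most often abused is Acrobat JavaScript, which can be switched off per user under Edit > Preferences > JavaScript, or locked off machine-wide via Adobe's FeatureLockDown policy key. A sketch of the registry route, where the version segment of the path (10.0, i.e. Reader X) is an assumption to adjust for your install:

        Windows Registry Editor Version 5.00

        ; Lock Acrobat JavaScript off so users cannot re-enable it
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\10.0\FeatureLockDown]
        "bDisableJavaScript"=dword:00000001

    For Word, the closest equivalent is the Trust Center: set macro security to "Disable all macros without notification" and restrict ActiveX controls.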

    Read the article

  • Help needed setting up nginx to serve static files.

    - by Catalina
    Hi guys, I'm trying to set up nginx to serve static files. Basically, all I need is for http://mydomain.com/site_media/ to point to /var/django/myproject/site_media. I have tried so many configurations, and when I test I always get a 404 error for static files. Can anyone please tell me what I'm doing wrong, or how I should be setting this up? This is my current nginx configuration file:

        user www-data;
        worker_processes 1;

        #error_log /usr/local/nginx/logs/error.log;
        #pid /usr/local/nginx/logs/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            # Enumerate all the Tornado servers here
            upstream frontends {
                server 127.0.0.1:8000;
                server 127.0.0.1:8001;
                server 127.0.0.1:8002;
                server 127.0.0.1:8003;
            }

            include mime.types;
            default_type application/octet-stream;

            #access_log /usr/local/nginx/logs/access.log;

            keepalive_timeout 65;
            proxy_read_timeout 200;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            gzip on;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain text/html text/css text/xml application/x-javascript application/xml application/atom+xml text/javascript;

            proxy_next_upstream error;

            server {
                listen 80;

                # Allow file uploads
                client_max_body_size 50M;

                location ^~ /site_media/ {
                    root /var/django/myproject/site_media;
                    if ($query_string) {
                        expires max;
                    }
                }
                location = /favicon.ico {
                    rewrite (.*) /site_media/favicon.ico;
                }
                location = /robots.txt {
                    rewrite (.*) /site_media/robots.txt;
                }

                location / {
                    proxy_pass_header Server;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Scheme $scheme;
                    proxy_pass http://frontends;
                }
            }

            #include /usr/local/nginx/sites-enabled/*;
        }

    Thanks, Cata
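
    A likely culprit, offered as an educated guess rather than a confirmed diagnosis: with location ^~ /site_media/, nginx appends the full request URI to root, so the configuration above looks for files under /var/django/myproject/site_media/site_media/..., which would explain the 404s. Two minimal fixes, roughly:

        # Option 1: point root at the parent directory;
        # nginx appends /site_media/... to it
        location ^~ /site_media/ {
            root /var/django/myproject;
        }

        # Option 2: use alias, which maps the location prefix
        # directly onto the directory (note both trailing slashes)
        location ^~ /site_media/ {
            alias /var/django/myproject/site_media/;
        }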

    Read the article

  • How to spoof complete browser identity

    - by Greenleader
    I found a question on how to spoof the user agent, but I don't want to spoof only the user agent. I want to spoof everything: the User-Agent, the Accept headers, the other HTTP headers, and also the information JavaScript can reveal about the browser (screen resolution and color depth, CPU class, platform, device name, etc.). Do you know of a way to achieve this in any browser out there? I don't want ten plugins to achieve it; I'd like a unified way.
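
    For context, a quick sketch of the JavaScript-visible surface the question refers to; any serious spoofing approach has to override all of these consistently, plus the matching HTTP headers (the property list is standard; the grouping is ours):

        // Properties any page can read without asking permission
        var fingerprint = {
            userAgent:  navigator.userAgent,        // mirrors the User-Agent header
            platform:   navigator.platform,         // e.g. "Win32", "Linux x86_64"
            language:   navigator.language,
            cpuClass:   navigator.cpuClass,         // IE-only; undefined elsewhere
            width:      screen.width,
            height:     screen.height,
            colorDepth: screen.colorDepth,
            timezone:   new Date().getTimezoneOffset()
        };
        console.log(JSON.stringify(fingerprint, null, 2));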

    Read the article

  • Htaccess strange behaviour with Nginx

    - by Termos
    I have a site running on Nginx (v1.0.14) serving as a reverse proxy which passes requests to Apache (v2.2.19). So Nginx runs on port 80 and Apache is on 8080. Overall the site works fine, except that I cannot block access to certain directories with an .htaccess file. For example, I have 'my-protected-directory' on 'www.site.com'. Inside it I have an .htaccess file with the following code:

        <Files *>
            order deny,allow
            deny from all
            allow from 1.2.3.4   <--- my ip address here
        </Files>

    When I try to access this page from my IP (1.2.3.4) I get a 404 error, which is not what I expect:

        http://www.site.com/my-protected-directory

    However, everything works as expected when the page is served directly by Apache. I can see this page; everyone else can't:

        http://www.site.com:8080/my-protected-directory

    Update. Nginx config (7.1.3.7 is the site IP):

        user apache;
        worker_processes 4;

        error_log logs/error.log;
        pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            sendfile on;
            keepalive_timeout 65;
            gzip on;
            gzip_min_length 1024;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_comp_level 5;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon;

            server {
                listen 80;
                server_name www.site.com site.com 7.1.3.7;
                access_log logs/host.access.log main;

                # serve static files
                location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                    root /var/www/vhosts/www.site.com/httpdocs;
                    proxy_set_header Range "";
                    expires 30d;
                }

                # pass requests for dynamic content to Apache
                location / {
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Range "";
                    proxy_pass http://7.1.3.7:8080;
                }
            }

    Could anyone please tell me what is wrong and how this can be fixed?
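
    One plausible explanation, offered as a reading of the config rather than a confirmed answer: any request that nginx answers itself never reaches Apache, so the .htaccess rules are never evaluated for it, and the static-file regex above intercepts a wide range of URLs before the proxy location does. If that is the cause, the restriction has to be duplicated at the nginx layer, along these lines:

        # ^~ makes this prefix win over the static-file regex, so the
        # check happens before any other location can intercept the request
        location ^~ /my-protected-directory {
            allow 1.2.3.4;
            deny all;
            proxy_pass http://7.1.3.7:8080;
        }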

    Read the article

  • Google analytics and multiple independent subdomains

    - by MTilsted
    I need some help trying to set up Google Analytics correctly. Here is my setup: we host sites for multiple customers, and each customer has their own subdomain on our site. So we have customerA.oursite.com and customerB.oursite.com, and as we add more customers we get more subdomains. We want to track the data for each customer independently, but I don't want to create a new Google tracking code for each new customer. So my plan is to track all visits with "oursite.com", and then create a filter in Google Analytics to get the data for each specific customer (all visits for a specific subdomain). Is this (one tracking code and a subdomain filter) the right way to do it? To create a subdomain filter, I add a new profile for each customer and then add a custom filter saying include "Request URI" and fill in "CustomerDomain.oursite.com". Is this the correct way to do it? And a general question about filters: is it really impossible to create a new filter by applying it to data in an existing profile? I would really like to just collect all the data in one "main" profile and then create subdomain filters as we need them, but it seems that Google only applies filters to new incoming data, not existing data. Is this really true? The following is my tracking code. Is '_setDomainName', 'none' the right thing to do?

        <script type="text/javascript">
            /* Tracking code for qrtown.com */
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-11584298-10']);
            _gaq.push(['_setDomainName', 'none']);
            _gaq.push(['_trackPageview']);

            (function() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript';
                ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0];
                s.parentNode.insertBefore(ga, s);
            })();
        </script>
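
    Two hedged notes, based on how the legacy ga.js library and profile filters are documented to work. First, the Google Analytics "Request URI" field contains only the path and query string, not the hostname, so a filter on "CustomerDomain.oursite.com" should be made against the "Hostname" field instead. Second, the documented pattern for tracking subdomains of a single domain is to set the cookie domain to the parent domain rather than 'none':

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-11584298-10']);
        // one set of __utm* cookies shared by customerA.oursite.com, customerB.oursite.com, ...
        _gaq.push(['_setDomainName', '.oursite.com']);
        _gaq.push(['_trackPageview']);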

    Read the article

  • Run a browser from a remote machine

    - by cathy02
    Hello, I can connect to a remote machine using SSH, and I would like to open a URL from that remote machine. Well, I can do this using lynx (a command-line browser), but then JavaScript etc. gets messed up. Is there a way I can open the URL and see it in, for example, my local machine's Firefox browser? Thanks
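
    Two common approaches, sketched under the assumption of ordinary SSH access (host and user names are placeholders). To browse through the remote machine but render in your local Firefox, open a SOCKS tunnel; to run the remote machine's own Firefox with its window displayed locally, use X11 forwarding:

        # Option 1: SOCKS proxy; point local Firefox at it
        # (Preferences -> Network -> Manual proxy, SOCKS host 127.0.0.1, port 1080)
        ssh -D 1080 user@remote.example.com

        # Option 2: X11 forwarding; needs "X11Forwarding yes" in the remote sshd_config
        ssh -X user@remote.example.com firefox http://example.com/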

    Read the article

  • What happened to walterzorn.com?

    - by Vinz
    I just wanted to use the function plotter from walterzorn.com, but it seems the site is down; I only get 404 responses. Does anybody know what's up with the site, and if/when it will be back online? So far I could only find the following thread, which doesn't have any answers. If you don't know: Walter Zorn had a JavaScript library that enabled you to draw vector graphics with DHTML, back before there was canvas.

    Read the article

  • Why doesn't my web page load completely every time?

    - by Gayanee Wijayasekara
    I have an intranet website running on IIS 7. When I try to load my site, it reacts differently every time. Here are the different scenarios that occur when I try to load it:

    1. The site loads right away and works properly.
    2. The site loads slowly, and some of my styling/images/JavaScript does not appear to load correctly.
    3. I receive a "503 Service Unavailable" error.

    Any ideas why this is happening?

    Read the article

  • A good file management/hosting/storage web service with embed-function?

    - by Andreas
    I am looking for a file management web service that lets me integrate the directory view into a commercial website. Another requirement: users can register themselves, but need to be approved before being able to download files. So something like box.net, but with more than just a Flash widget. I would prefer some JavaScript that can be embedded. Thanks for any recommendations.

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website that essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    1. User opens page to view the web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. An FTP connection is enabled for the camera's FTP user.
    4. The web camera opens an FTP connection to the server.
    5. The web camera begins taking photos.
    6. The web camera sends each photo to the FTP server.

    On image URL request:

    1. The server reads the latest image on the hard drive uploaded via FTP for that camera.
    2. The server deletes any older images from the server.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that, instead of having the files read from the server's disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly due to the fact that PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: web cameras issue POST requests with the latest image to Nginx/PHP, and the latest image for each camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:

    1. User opens page to view the web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. The camera is sent an HTTP request to start posting images to a provided URL.
    4. The web camera begins taking photos.
    5. The web camera sends POST requests to the server as fast as it can.

    On image URL request:

    1. The server reads the latest image from Redis.
    2. The server tells Redis to delete the older image.

    My questions are:

    1. Are there any greater overheads in transferring images via HTTP instead of FTP?
    2. Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    3. Is there any way to prevent potentially DoS'ing our own servers with web camera requests?
    4. Is Redis a good solution to this problem?
    5. Should I abandon the PHP/Nginx combination and go for something else?
    6. Is this proposed solution actually any good?
    7. Will adding HTTPS to the mix make posting the images too slow?

    Thanks in advance, Alan
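
    To make the Redis idea concrete, here is a minimal sketch of the ingest endpoint, assuming the phpredis extension and a hypothetical camera-ID query parameter. Since only the latest frame matters, one key per camera that is simply overwritten removes the explicit delete step from the flow:

        <?php
        // ingest.php?camera=42 -- camera firmware POSTs a raw JPEG as the request body
        // (the URL scheme and camera ID format are assumptions)
        $cameraId = isset($_GET['camera']) ? preg_replace('/\D/', '', $_GET['camera']) : '';
        $frame    = file_get_contents('php://input');

        $redis = new Redis();               // phpredis extension
        $redis->connect('127.0.0.1', 6379);
        // SETEX overwrites the previous frame and expires it after 60s,
        // so a camera that goes offline disappears on its own
        $redis->setex("camera:$cameraId:latest", 60, $frame);

        // The browser-facing endpoint is the inverse:
        //   $jpeg = $redis->get("camera:$cameraId:latest");
        //   header('Content-Type: image/jpeg');
        //   echo $jpeg;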

    Read the article

  • Is the guideline: don't open email attachments or execute downloads or run plug-ins (Flash, Java) from untrusted sites enough to avert infection?

    - by therobyouknow
    I'd like to know if the following is enough to avert malware, as I feel that the press and other advisory resources aren't always precisely clear on all the methods by which PCs get infected. To my mind, the key step in getting infected is a conscious choice by the user to run an executable attachment from an email or download, but also to view content that requires a plug-in (Flash, Java or something else). This conscious step breaks down into the following possibilities:

    - Don't open email attachments. I certainly agree with this. But let's try to be clear: email comes in two parts, the text and the attachment. Just reading the email should not be risky, right? But opening (i.e. running) email attachments IS risky (malware can be present in the attachment).
    - Don't execute downloads (e.g. from sites linked in suspect emails or otherwise). Again, I certainly agree (malware can be present in the executable). Usually the user has to voluntarily click to download, or at least click to run the executable. Question: has there ever been a case where a user has visited a site and a download has completed and run entirely on its own?
    - Don't run content requiring plug-ins. Certainly agree: malware can be present in the executable. I vaguely recall cases with Flash, but know the Java-based vulnerabilities much better.

    Now, is the above enough? Note that I'm much more cautious than this. What I'm concerned about is that the media is not always very clear about how the malware infection occurs. They talk of "booby-trapped sites" and "browser attacks", but HOW exactly? I'd presume the other threat would be malevolent use of JavaScript to make an executable run on the user's machine. Would I be right, and are there details I can read up on about this? Generally I like JavaScript as a developer, please note. An accepted answer would fill in any holes I've missed here, so we have a complete general view of what the threats are (even though the specific details of new threats vary, the general vectors are known).

    Read the article

  • How to make firefox proxy authentication fail silently?

    - by Vincent McNabb
    At work, certain protocols are blocked, and websites that I visit try to access those protocols with JavaScript. These sites work fine when the requests fail (except for whatever the requests were trying to do), but I have to click Cancel on a multitude of proxy authentication dialogs. What I want is for Firefox to just ignore these failures silently, so I can use the website without having to click Cancel eight times on every action I take (this includes all the Stack Overflow-style sites, which try to make requests over the ws: protocol).

    Read the article

  • Nginx config - serving index.html not working

    - by Bill
    I can't figure out how to redirect / to index.html. I've gone through the threads on Server Fault and I think I've tried every suggestion, including:

    - rewrite statements within location /
    - index index.html at the server level, within location /, and within the static-content location
    - moving the node.js proxy statements to location ~ /i instead of within location /

    Obviously something is wrong somewhere else in my configuration. Here is my nginx.conf:

        worker_processes 1;
        pid /home/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            error_log /home/logs/error.log;
            access_log /home/logs/access.log combined;
            include sites-enabled/*;
        }

    and my server config located in sites-enabled:

        server {
            root /home/www/public;
            listen 80;
            server_name localhost;

            # proxy request to node
            location / {
                index index.html index.htm;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://127.0.0.1:3010;
                proxy_redirect off;
                break;
            }

            # static content
            location ~ \.(?:ico|jpe?g|jpeg|gif|css|png|js|swf|xml|woff|eot|svg|ttf|html)$ {
                access_log off;
                add_header Pragma public;
                add_header Cache-Control public;
                expires 30d;
            }

            gzip on;
            gzip_vary on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_min_length 1000;
            gzip_disable "msie6";
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        }

    Everything else is working just fine: requests get proxied to node correctly, and static content is served correctly. I just need to be able to forward requests made to / to /index.html.
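
    A plausible explanation, offered as a reading of the config rather than a confirmed answer: inside location /, proxy_pass wins, and the index directive only applies when nginx serves from the filesystem, so it never fires before the request is handed to node. A minimal sketch of a fix is an exact-match location for /, which takes precedence over the prefix location:

        # exact match beats "location /", so requests for the bare /
        # never reach the proxy
        location = / {
            rewrite ^ /index.html last;
        }

    The rewritten URI /index.html then matches the existing static-content regex (which already lists the html extension) and is served from the server-level root.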

    Read the article

  • How to write a ~/.firefoxrc?

    - by kev
    I want Firefox to source ~/.firefoxrc automatically when I open a webpage. ~/.firefoxrc contains several JavaScript functions:

        Array.prototype.sum = function() {
            for (var i = 0, sum = 0; i < this.length; sum += this[i++]);
            return sum;
        };
        Array.prototype.max = function() {
            return Math.max.apply({}, this);
        };
        Array.prototype.min = function() {
            return Math.min.apply({}, this);
        };

    so that I can use these functions in the Firebug console.
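
    Firefox has no built-in rc file that it sources per page, but a user-script extension such as Greasemonkey gets close to the same effect. A minimal sketch (the @include pattern and the idea of inlining the functions are assumptions; you could also paste in the file's contents by hand):

        // ==UserScript==
        // @name     firefoxrc
        // @include  *
        // ==/UserScript==

        // Inject the helpers into the page's own context so they are
        // visible to the Firebug console and to page scripts
        var s = document.createElement('script');
        s.textContent =
            "Array.prototype.sum = function(){ for(var i=0,s=0;i<this.length;s+=this[i++]); return s; };" +
            "Array.prototype.max = function(){ return Math.max.apply({}, this); };" +
            "Array.prototype.min = function(){ return Math.min.apply({}, this); };";
        document.documentElement.appendChild(s);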

    Read the article

  • "Save as webpage, complete" problem ?

    - by Adelave
    I have a problem when I try to save a web page from the browser using "Save as Webpage, complete": some of the CSS/JavaScript/images are not saved, so when I reopen the saved page offline it can't display properly. How do I solve this? How can a web page be saved properly from the browser? I don't want to use the MHT extension, because my situation is that I need to give a URL to a web designer who will open the URL, save the web page, and modify the CSS/HTML. Thanks

    Read the article

  • Automatic sort for excel worksheet

    - by Joseph
    I want to create a to-do list in Excel that automatically sorts the to-do entries in a list, in order of which ones to do first (closest deadlines). I would also like a section that shows the tasks for today, and another for high-priority tasks coming up within a week. I have not programmed in Excel before; I know Python and JavaScript, but I want an Excel solution that runs inside Excel (maybe using VBA, the Excel programming language). Is this sort of thing possible in Excel?
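
    This is possible, and VBA (which the question already names) is the natural tool. As a taste of the approach, a minimal sketch that re-sorts a task range by its deadline column whenever the sheet changes; the A2:C100 range and the column layout are assumptions to adapt:

        Private Sub Worksheet_Change(ByVal Target As Range)
            ' Assumed layout: column A = task, B = deadline, C = priority
            Application.EnableEvents = False   ' avoid re-triggering this handler
            Me.Range("A2:C100").Sort _
                Key1:=Me.Range("B2"), Order1:=xlAscending, Header:=xlNo
            Application.EnableEvents = True
        End Sub

    Paste it into the worksheet's code module (right-click the sheet tab, View Code). The "due today" and "due within a week" sections can then be plain formulas comparing the deadline column against TODAY() and TODAY()+7.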

    Read the article

  • Notepad++ regular expression

    - by arvindwill
    I have a JavaScript file with millions of lines. The problem is that IE doesn't support a ',' (comma) followed by a '}' (closing curly bracket). Using Notepad++, I need to find all the commas that are followed by a closing curly bracket. The regular expression \,.*\} works, but the problem is that between the comma and the closing bracket there can be many tabs, spaces, newlines or line feeds, and I can't find a way to express newlines plus spaces in the regular expression. Like this one:

        somestring,
        }
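
    A hedged suggestion: in Notepad++'s "Regular expression" search mode, \s matches spaces, tabs and line breaks alike, so a single find-and-replace along these lines should remove the offending commas while keeping the original whitespace (test on a copy of the file first):

        Find what:    ,(\s*)}
        Replace with: \1}

    One caveat: Notepad++ versions before 6.0 (pre-PCRE) could not match across line breaks, so on an old build you may need "Extended" mode with an explicit \r\n, or an upgrade.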

    Read the article

  • How to loop through all Illustrator files in a folder (CS6)

    - by Julian
    I have written some JavaScript to save .ai files to two separate locations with different resolutions, one of them cropped to a reduced-size artboard (courtesy of John Otterud / Articmill for the main part). There are other variables in the script that I am not using at present, but I want to leave the functionality there for a later date/additional layers to export/other resolutions etc. I can't get it to loop through all files in a folder. I cannot find the script that works, or can't insert it at the right place. I can get as far as selecting the folder and, I suppose, creating an array, but after that, what next? This is the create-array part of the script:

        // JavaScript Document
        // Set up variables
        var destDoc, sourceDoc, sourceFolder, newLayer;

        // Select the source folder.
        sourceFolder = Folder.selectDialog('Select the folder with Illustrator files that you want to merge into one', '~');
        destDoc = app.documents.add();

        // If a valid folder is selected
        if (sourceFolder != null) {
            files = new Array();
            // Get all files matching the pattern
            files = sourceFolder.getFiles();

    I have inserted this at the beginning of the main script (probably where I am going wrong, because I can select the folder but then nothing more):

        #target illustrator
        var docRef = app.activeDocument;
        with (docRef) {
            if (layers[i].name = 'HEADER') {
                layers[i].name = '#' + activeDocument.name;
                save()
            }
        }

        // *** Export Layers as PNG files (in multiple resolutions) ***
        var subFolderName = "For_PLMA";
        var subFolderTwoName = "For_VLP";
        var saveInMultipleResolutions = true;
        // ...
        // Note: only use one character!
        var exportLayersStartingWith = "%";
        var exportLayersWithArtboardClippingStartingWith = "#";
        // ...
        var normalResolutionFileAppend = "_VLP";
        var highResolutionFileAppend = "_PLMA";
        // ...
        var normalResolutionScale = 100;
        var highResolutionScale = 200;
        var veryhighResolutionScale = 300;

        // *** Start of script ***
        var doc = app.activeDocument;

        // Make sure we have saved the document
        if (doc.path != "") {

    Then the rest of the export script runs on from there.
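
    A sketch of the missing loop, offered as a starting point rather than a drop-in fix: Folder.getFiles() returns an array, so the usual ExtendScript pattern is to open each file in turn, run the per-document work, and close it. Note in passing that the #target fragment above references layers[i] without any loop, and uses = (assignment) where == (comparison) is presumably intended:

        // Process every Illustrator file in the chosen folder
        // (the *.ai mask and the per-document body are assumptions)
        var files = sourceFolder.getFiles('*.ai');
        for (var f = 0; f < files.length; f++) {
            var doc = app.open(files[f]);

            // per-document work goes here, e.g. renaming the HEADER layer:
            for (var i = 0; i < doc.layers.length; i++) {
                if (doc.layers[i].name == 'HEADER') {
                    doc.layers[i].name = '#' + doc.name;
                }
            }

            // ... run the export routine against `doc` ...

            doc.close(SaveOptions.SAVECHANGES); // or DONOTSAVECHANGES
        }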

    Read the article

  • Facebook, Twitter, Yahoo doesn't work

    - by Toktik
    Some sites don't work normally: they open without CSS and images, and with JavaScript errors. Facebook gets stuck on static.ak.fbcdn.net, Twitter gets stuck on a1.twimg.com, and Yahoo gets stuck on l.yimg.com. In Firefox I just get "Waiting for ..." (any of those hosts). I can access Facebook only over SSL, like https://facebook.com. When I ping those hosts, I only receive "Request timed out."

    Read the article
