Search Results

Search found 13160 results on 527 pages for 'response redirect'.

Page 156/527 | < Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >

  • htaccess RewriteRule to make one query part of another

    - by Dan T
    I have a URL /embed?t=X and I want to redirect it to /page/embed/X, where X is any number of alphanumeric characters. I know most redirect rules go the other way, but for the purposes of the application I need to reverse it. Any ideas? I have tried things like:

        RewriteRule ^embed\?t\=([a-zA-Z0-9]+)$ /page/embed/$1

    but with no luck.
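    A minimal sketch of one way to do this (untested): RewriteRule never sees the query string, so match it with a RewriteCond and reference the capture as %1; the trailing ? drops the original query string from the target.

        RewriteEngine On
        # match the t= parameter in the query string, not in the rule pattern
        RewriteCond %{QUERY_STRING} ^t=([a-zA-Z0-9]+)$
        RewriteRule ^embed$ /page/embed/%1? [R=302,L]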

    Read the article

  • Check availability of a page before loading using jQuery/Ajax

    - by overcomer
    Is it possible to check the accessibility of a page before loading it? I have a form, running on a mobile device over a wireless connection. The problem is that this connection is not always available, and I would like to alert the user when they submit or unload the page. The page contains elements that do a redirect, like this:

        <input type="button" value="MyText" onClick="script1;script2;...window.location='mylocation'" />

    If the user clicks this button and the server is not reachable, I will receive some undesirable errors. Also, if I want to generalize my script, I do not know the value of "mylocation" in advance. The page also contains elements to submit the form:

        <input type="submit" name="SUBMIT" value="MyValue" onClick="return eval('validationForm()')" />

    For the submitting I'm using the ajaxForm plugin and it works quite well. Thanks to your answers I found the solution to the problem; here is the code:

        function checkConnection(u, s) {
            $.ajax({
                url: u,
                cache: false,
                timeout: 3000,
                error: function(jqXHR, textStatus) {
                    alert("Request failed: " + textStatus);
                },
                success: function() {
                    eval(s);
                }
            });
        }

        $(document).ready(function() {
            // part of the function that checks buttons with a redirect:
            // any input that contains a redirect in its onClick attribute ("window.location=")
            $("input[type=button]").each(function() {
                var script = $(this).attr("onClick");
                var url = "";
                var position = script.indexOf("window.location");
                if (position >= 0) { // case of redirect
                    url = script.substring(position + 17, script.length);
                    url = url.split("\'")[0];
                    url = "\'" + url + "\'";       // that's my url
                    script = "\"" + script + "\""; // that's the complete script
                    $(this).attr("onClick", "checkConnection(" + url + "," + script + ")");
                }
            });

            // part of the function that checks the submit buttons (using the ajaxForm plugin)
            var is_error = false;
            var options = {
                error: function() {
                    if (alert("Error Message") == true) {
                    }
                    is_error = true;
                },
                target: window.document,
                replaceTarget: is_error,
                timeout: 3000
            };
            $("#myForm").ajaxForm(options);
        });

    I hope this will be useful.

    Read the article

  • Rewrite only URLs that don't exist

    - by PeterBelm
    I'm looking for a way to rewrite URLs only if the path doesn't exist. This isn't to handle 404s, but to redirect page URLs to a shared PHP file (i.e. '/contact-us/' -> '/show_page.php?page=contact-us'). The basic redirect is easy enough to achieve; however, I want to be able to override the default page by adding '/contact-us/index.php' in the site root. Is this achievable with mod_rewrite or would I have to do something else?
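    A minimal sketch of the usual mod_rewrite pattern for this (assuming an .htaccess in the site root; the page-name character class is a guess):

        RewriteEngine On
        # only rewrite when no real file or directory matches the request
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9-]+)/?$ show_page.php?page=$1 [L,QSA]

    Because the conditions test the filesystem first, a real /contact-us/index.php would be served untouched.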

    Read the article

  • Back button in ajax update panel not working

    - by Domnic
    I'm using an UpdatePanel on my screen. I have two pages: when I click the submit button on page 1, it redirects to page 2. On page 2 I have a back button, for which I wrote the click handler onclick="history.go(-1)". When I click back it redirects to page 1, but the records already shown on page 1 are no longer displayed. How can I solve this problem? Can I use a ScriptManager?
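    One possible direction (a sketch, untested against this setup): in ASP.NET 3.5 SP1 and later, the ScriptManager can record history points for partial postbacks so the browser back button can restore UpdatePanel state.

        <asp:ScriptManager ID="ScriptManager1" runat="server"
            EnableHistory="true" OnNavigate="ScriptManager1_Navigate" />

        // code-behind: record a history point after the partial postback that changed the records
        ScriptManager1.AddHistoryPoint("view", "results");

        // restore the recorded state when the user navigates back
        protected void ScriptManager1_Navigate(object sender, HistoryEventArgs e)
        {
            string view = e.State["view"]; // re-bind the records for this state
        }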

    Read the article

  • Error while debugging (role redirection)

    - by Chris White
    What is wrong with my role redirection?

        protected void Login1_LoggedIn(object sender, EventArgs e)
        {
            if (Roles.IsUserInRole(Login1.UserName, "Aemy"))
                Response.Redirect("~/Admin/Home.aspx");
            else if (Roles.IsUserInRole(Login1.UserName, "User"))
                Response.Redirect("~/Welcome/User1.aspx");
        }

    Error: The name 'Roles' does not exist in the current context
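    The error itself just means the namespace isn't imported: Roles lives in System.Web.Security. A minimal sketch of the fix (the role manager must also be enabled in web.config):

        // at the top of the code-behind
        using System.Web.Security;

        <!-- web.config -->
        <system.web>
          <roleManager enabled="true" />
        </system.web>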

    Read the article

  • IIS 7.5 FTPS external access - 534 Policy requires SSL

    - by markmnl
    I have set up an FTP site that requires SSL, but when I try to connect to it externally I get the error:

        220 Microsoft FTP Service
        534 Policy requires SSL.

    I know - I set it so! Why doesn't it fetch the SSL cert from the site and allow me to log on? (Incidentally, beware of all the tutorials that Allow but do not Require SSL - while that will solve the problem, it will be because SSL is not being used!). I suspect I may need a client that supports FTPS (FTP over SSL), and Windows Explorer just uses IE, which does not. But trying FileZilla and WinSCP I get a little further; then it hangs on TLS/SSL negotiation, expecting a response from the server...

    UPDATE: I have tried (from http://learn.iis.net/page.aspx/309/configuring-ftp-firewall-settings/):

    - Configuring the passive port range for the FTP service.
    - Configuring the external IPv4 address for a specific FTP site.
    - Configuring the firewall to allow the FTP service to listen on all ports that it opens.
    - Disabling stateful FTP filtering so that Windows Firewall will not block FTP traffic.

    And still I get (in FileZilla, trying both Active and Passive):

        Status: Connecting to 203.x.x.x:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command: AUTH TLS
        Response: 234 AUTH command ok. Expecting TLS Negotiation.
        Status: Initializing TLS...
        Error: Connection timed out
        Error: Could not connect to server

    The Windows firewall logs unhelpfully have nothing to say.

    UPDATE 2: Turning the firewall off does not resolve the problem. I cannot believe how difficult it is to get something so simple to work, and even after following the documentation it does not work.

    UPDATE 3: Running FileZilla locally, connecting through the loopback works in Active mode; in Passive mode I get up to:

        Command: LIST
        Response: 150 Opening BINARY mode data connection.
        Error: GnuTLS error -53: Error in the push function.

    Turning the firewall off at both ends, I still cannot connect the client and get the same error as above.
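    For reference, the firewall steps in that guide boil down to two commands from an elevated prompt. Note that with FTPS the firewall cannot inspect the encrypted control channel, so stateful FTP filtering cannot open data ports dynamically and the passive range must be opened explicitly.

        netsh advfirewall firewall add rule name="FTP Service" action=allow service=ftpsvc protocol=TCP dir=in
        netsh advfirewall set global StatefulFtp disable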

    Read the article

  • Including hostname in apache logwatch reports

    - by Robert Munteanu
    When hosting multiple domains with Apache it's useful to see the logwatch apache output with the virtual host name included, but I only get:

        --------------------- httpd Begin ------------------------
        Requests with error response codes
           400 Bad Request
              /: 1 Time(s)
              /robots.txt: 1 Time(s)

    whereas I would like something like:

        --------------------- httpd Begin ------------------------
        Requests with error response codes
           400 Bad Request
              example.com/: 1 Time(s)
              example.org/robots.txt: 1 Time(s)

    How can I achieve this with logwatch?
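    One common prerequisite (a sketch; whether logwatch's http service parses the extra field depends on its own configuration, which I have not verified): make Apache log the vhost name by using the stock vhost_combined format.

        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        CustomLog /var/log/httpd/access_log vhost_combined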

    Read the article

  • Why do two patterns (/.*) and (.*) match different strings? @per-directory (.htaccess) mod_rewrite RewriteRule

    - by Leftium
    Shouldn't the two patterns (/.*) and (.*) match the same string? My real question is actually: where did the "abc" go? Something funky seems to be happening inside the mod_rewrite engine... Given this .htaccess file in www/dir/:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteRule (/.*) print_url_args.php?result=$1

    A request for http://localhost/dir/abc/123/ results in:

        result ($1) = "/123/"
        $_REQUEST_URI = "/dir/abc/123/"

    If the / is removed from the pattern, like

        RewriteRule (.*) print_url_args.php?result=$1

    the same request for http://localhost/dir/abc/123/ results in:

        result ($1) = "print_url_args.php"
        $_REQUEST_URI = "/dir/abc/123/"

    update: posted rewrite log.

        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) [perdir C:/db/www/dir/] add path info postfix: C:/db/www/dir/abc -> C:/db/www/dir/abc/123/
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) [perdir C:/db/www/dir/] strip per-dir prefix: C:/db/www/dir/abc/123/ -> abc/123/
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) [perdir C:/db/www/dir/] applying pattern '(/.*)$' to uri 'abc/123/'
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (2) [perdir C:/db/www/dir/] rewrite 'abc/123/' -> 'print_url_args.php?result=/123/'
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) split uri=print_url_args.php?result=/123/ -> uri=print_url_args.php, args=result=/123/
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) [perdir C:/db/www/dir/] add per-dir prefix: print_url_args.php -> C:/db/www/dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (2) [perdir C:/db/www/dir/] strip document_root prefix: C:/db/www/dir/print_url_args.php -> /dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (1) [perdir C:/db/www/dir/] internal redirect with /dir/print_url_args.php [INTERNAL REDIRECT]
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#43833c8/initial/redir#1] (3) [perdir C:/db/www/dir/] strip per-dir prefix: C:/db/www/dir/print_url_args.php -> print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#43833c8/initial/redir#1] (3) [perdir C:/db/www/dir/] applying pattern '(/.*)$' to uri 'print_url_args.php'
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#43833c8/initial/redir#1] (1) [perdir C:/db/www/dir/] pass through C:/db/www/dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) [perdir C:/db/www/dir/] add path info postfix: C:/db/www/dir/abc -> C:/db/www/dir/abc/123/
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) [perdir C:/db/www/dir/] strip per-dir prefix: C:/db/www/dir/abc/123/ -> abc/123/
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) [perdir C:/db/www/dir/] applying pattern '(.*)$' to uri 'abc/123/'
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (2) [perdir C:/db/www/dir/] rewrite 'abc/123/' -> 'print_url_args.php?result=abc/123/'
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) split uri=print_url_args.php?result=abc/123/ -> uri=print_url_args.php, args=result=abc/123/
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) [perdir C:/db/www/dir/] add per-dir prefix: print_url_args.php -> C:/db/www/dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (2) [perdir C:/db/www/dir/] strip document_root prefix: C:/db/www/dir/print_url_args.php -> /dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (1) [perdir C:/db/www/dir/] internal redirect with /dir/print_url_args.php [INTERNAL REDIRECT]
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (3) [perdir C:/db/www/dir/] strip per-dir prefix: C:/db/www/dir/print_url_args.php -> print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (3) [perdir C:/db/www/dir/] applying pattern '(.*)$' to uri 'print_url_args.php'
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (2) [perdir C:/db/www/dir/] rewrite 'print_url_args.php' -> 'print_url_args.php?result=print_url_args.php'
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (3) split uri=print_url_args.php?result=print_url_args.php -> uri=print_url_args.php, args=result=print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (3) [perdir C:/db/www/dir/] add per-dir prefix: print_url_args.php -> C:/db/www/dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (1) [perdir C:/db/www/dir/] initial URL equal rewritten URL: C:/db/www/dir/print_url_args.php [IGNORING REWRITE]

    Read the article

  • Real-time log parsing and reporting

    - by Alienfluid
    We have a small project we are working on part-time that runs on Nginx/MongoDB on Ubuntu 10.04 LTS Server. We'd like to be able to see reports on things like server load, requests/sec, response time, DB load, DB response time, etc. Is there an open source or free (as in beer) tool that can parse such logs and provide a real-time report? I looked into Splunk briefly, but I wanted to see if there are any others that are highly recommended.

    Read the article

  • Nagios3: Conditional operators for service checks?

    - by Dave
    I'm trying to set up Nagios to monitor my various servers, using hostgroups to define 'machine roles', against which I run services to check the machines by role. However, I'd like to use conditional operators that would enable me to run the service check against an intersection of two hostgroups, rather than their union... i.e. using &&, ||, or () operators. For example, imagine I have the following servers:

    - www-eu: Linux WWW (Apache) server, in the EU
    - www-us: Windows WWW (IIS) server, in the US (West Coast)
    - ftp-eu: Linux FTP server, in the EU
    - ftp-us: Windows FTP server, in the US

    I would want to create the following hostgroups:

    - US-Servers: www-us, ftp-us
    - EU-Servers: www-eu, ftp-eu
    - WWW-Servers: www-us, www-eu
    - FTP-Servers: ftp-us, ftp-eu

    Now say I'm interested in checking the HTTP response time for my web servers. Let's say this particular Nagios service runs from the US (West Coast), and that I have a command called check_http_response_time. This command checks the responsiveness of the HTTP server, and I can provide an argument that defines the max response time before raising critical. My command might look like:

        check_http_response_time $HOSTNAME$ 50

    Now traditionally, I can run my checks by specifying a list of hosts or hostgroups:

        define service{
            use                 local-service
            hostgroup_name      WWW-Servers   # Servers = www-us, www-eu
            servicegroups       WWW Checks
            service_description Check HTTP Response Time
            check_command       check_http_response_time!50
        }

    However, with the above service definition, given that my Nagios service is in US West, I could reasonably expect my EU server to return critical. Really, I want different thresholds for each region (50 for US West, 200 for EU). I would have to permutate my service for each host and set their custom thresholds, or alternatively permutate my service groups by role & region (i.e. WWW-Servers-EU) and run my specific thresholds against those. Though the latter is better, both are much messier than I'd like... What I would love, and what this post is asking for, is a way to use hostgroups to perform an intersection using conditional logic, rather than a simple union. It might look like:

        define service{
            use                 local-service
            hostgroup_name      WWW-Servers && US-Servers
            servicegroups       WWW Checks
            service_description Check HTTP Response Time
            check_command       check_http_response_time!50
        }

    It would then run the check only against servers that are in both WWW-Servers and US-Servers - in my example, just www-us. The benefits of such a feature would be significant for Nagios configurations at large scale. Is this feature available? If it isn't, will it be available in the future? Is there an alternative way to accomplish this in the most recent Nagios version? Any tips/suggestions are most appreciated! Dave

    Read the article

  • http compression shared hosting apache/php

    - by gansodesoya
    Hi, I was sniffing the response headers of one of my sites, and apparently it is not using HTTP compression to deliver responses, because I'm not seeing Content-Encoding: gzip in the response headers. But the weird thing is that phpinfo() shows me HTTP_ACCEPT_ENCODING: gzip,deflate,sdch. I'm using a Rackspace cloud site (shared hosting, can't access httpd config), and I really want to activate HTTP compression, but the support guys over there tell me that if phpinfo() says it, it's already on. Thanks!
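    Note that HTTP_ACCEPT_ENCODING is the browser's request header (what the client can accept), not proof that the server compresses anything. A hedged workaround on shared hosting, assuming output buffering is permitted, is to gzip from PHP itself:

        <?php
        // compress this script's output when the client supports it;
        // ob_gzhandler inspects Accept-Encoding automatically
        ob_start('ob_gzhandler');

        echo str_repeat('Hello, compressed world! ', 200);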

    Read the article

  • How do I assign a number value to a non-numerical value in Excel

    - by Keyslinger
    Greetings. I have some survey responses with values like "VU" for "Very Unlikely" and "S" for "Sometimes". Each survey response occupies a cell. For each cell containing a survey response, I want to fill another cell with a corresponding number. For example, for every cell containing "VU" I want to fill a corresponding cell with the number 1. How is this done?
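    A minimal sketch of the usual lookup-table approach (the cell addresses are assumptions): put the code/number pairs in two columns, say E2:F6, with responses in column A starting at A2, then enter this in B2 and fill down:

        =VLOOKUP(A2, $E$2:$F$6, 2, FALSE)

    For just a couple of codes, a nested IF also works: =IF(A2="VU",1,IF(A2="S",2,"")).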

    Read the article

  • Can I get advice on my nginx configuration (as a proxy in front of Jira and Confluence)?

    - by Nate
    I was wondering if I could get some advice on my nginx configuration. The config seems to be working, but I'm unsure if I'm doing everything properly. The basic idea is to have a Jira and Confluence server (in separate Tomcat instances) running on the same machine, with nginx in front to handle SSL for both. I want only SSL connections to be made to Jira/Confluence. Jira is running on 127.0.0.1:9090 and Confluence on 127.0.0.1:8080. Here is my nginx.conf; any advice or tips would be greatly appreciated.

        user nginx;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] $request '
                            '"$status" $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            #gzip on;

            # Load config files from the /etc/nginx/conf.d directory
            include /etc/nginx/conf.d/*.conf;

            # Our self-signed cert
            ssl_certificate /etc/ssl/certs/fissl.crt;
            ssl_certificate_key /etc/ssl/private/fissl.key;

            # redirect non-ssl Confluence to ssl
            server {
                listen 80;
                server_name confluence.example.com;
                rewrite ^(.*) https://confluence.example.com$1 permanent;
            }

            # redirect non-ssl Jira to ssl
            server {
                listen 80;
                server_name jira.example.com;
                rewrite ^(.*) https://jira.example.com$1 permanent;
            }

            #
            # The Confluence server
            #
            server {
                listen 443;
                server_name confluence.example.com;
                ssl on;

                access_log /var/log/nginx/confluence.access.log main;
                error_log /var/log/nginx/confluence.error.log;

                location / {
                    proxy_pass http://127.0.0.1:8080;
                    proxy_set_header X-Forwarded-Proto https;
                    proxy_set_header Host $http_host;
                }

                error_page 404 /404.html;
                location = /404.html {
                    root /usr/share/nginx/html;
                }

                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root /usr/share/nginx/html;
                }
            }

            #
            # The Jira server
            #
            server {
                listen 443;
                server_name jira.example.com;
                ssl on;

                access_log /var/log/nginx/jira.access.log main;
                error_log /var/log/nginx/jira.error.log;

                location / {
                    proxy_pass http://127.0.0.1:9090/;
                    proxy_set_header X-Forwarded-Proto https;
                    proxy_set_header Host $http_host;
                }

                error_page 404 /404.html;
                location = /404.html {
                    root /usr/share/nginx/html;
                }

                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root /usr/share/nginx/html;
                }
            }
        }

    Read the article

  • Question about the concept of installing Linux on a flash card

    - by Johnny
    1. I can't understand why Linux on a flash card needs an install step. Does it simply copy certain files to certain locations on the flash card? I mean, could you plan it in a response file, then have a program read the plan from the response file and write the proper layout to the flash card?

    2. Is the file system bound tightly to the Linux kernel? Is it possible to let each kernel, user, and app have its own root, rather than mounting everything under one single / "root"?

    Read the article

  • Apache httpOnly Cookie Information Disclosure CVE-2012-0053

    - by John
    A PCI compliance scan on a CentOS LAMP server fails with this message (the Server header and ServerSignature don't expose the Apache version):

        Apache httpOnly Cookie Information Disclosure CVE-2012-0053

    Can this be resolved by simply specifying a custom ErrorDocument for the 400 Bad Request response? How is the scanner determining this vulnerability? Is it invoking a bad request and then looking to see whether it's the default Apache 400 response?
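    For what it's worth, CVE-2012-0053 was fixed in Apache 2.2.22, so the real remedy is a patched httpd (on CentOS, usually a backported vendor update). A custom error page may only mask the fingerprint a scanner looks for; a minimal sketch:

        # httpd.conf - serve a non-default 400 page (hides the fingerprint, does not patch the bug)
        ErrorDocument 400 "Bad Request"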

    Read the article

  • nginx location rewrite but proxy_rewrite off

    - by Jan
    I'm trying to use nginx to proxy requests to my internal backend. My configuration reads as follows:

        location /Shibboleth.sso {
            proxy_pass internal-backend; # ip
            proxy_redirect off;
        }

    But my redirects are always rewritten. My backend returns a response like https://www.google.de/test and my browser receives https://www.mydomain.de/test. How do I get nginx to just forward the response?

    Read the article

  • Ubuntu 9.10 Virtual Machine does not respond on VMWare Fusion

    - by mgpyone
    I've just installed Ubuntu 9.10 on VMware Fusion; the host is a Mac. The install succeeded and I was able to use the VM. Then I shut it down and reopened it, and now it no longer responds. I've checked the settings, and they show the VM as running. I've waited a long time, but still no response. Any ideas better than reinstalling?

    Read the article

  • A small area on my Windows 7 desktop cannot be clicked

    - by Annie
    It is a small area within the normal icons area on the desktop of Windows 7, 64-bit version. It looks normal, but when I try to click on it, there is no response. If I move a folder onto this area, then I cannot click on the folder. Even when there is nothing there, clicking gives no response, and right-clicking does not bring up the context menu it should. What could be the problem?

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node application.

    We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function to copy the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". These two steps ensure that we have an empty table.
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account name and key, which can be found in the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation failed. Under the hood, the azure module performs the table deletion operation in the POSIX async thread pool asynchronously, and once it's done the callback function is performed. This is the reason we need to nest the table creation code inside the deletion callback. If we put the table creation code after the deletion code, they would be invoked in parallel.

    Next, for each record in WASD I created an entity and inserted it into the table service. Finally I sent the response back to the browser. Can you find a bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: when all entities have been copied to the table service with no error, then I send the response to the browser; otherwise I send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code of inserting table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform for each item in the array. And the third argument is invoked when all items have been processed or any error occurred. There we can send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally, and now we can see the response is sent after all entities have been inserted.

    Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria; see for example the code here. In the code below I query an entity by partition key and row key, and return the proper localization value in the response.

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can define these values in the CSCFG and CSDEF files and let the runtime retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application.

    In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js runs inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. That way the Node.js application has the connection to the service runtime and is able to read the configurations.

    Starting Node.js in the local emulator, we can see it retrieved the connection string and storage account for the local environment.
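    For reference, the Runtime/EntryPoint addition described above might look like this in the CSDEF file (a sketch based on the ServiceDefinition schema; the attribute values are assumptions):

        <WorkerRole name="WorkerRole1">
          <Runtime>
            <EntryPoint>
              <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
            </EntryPoint>
          </Runtime>
        </WorkerRole>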
    And if we publish our application to Azure, it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an EntryPoint element in the CSDEF file and set "node.exe" as the entry point.

    I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's very simple, easy to learn and deploy, and utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, and "azure" doesn't yet support creating the storage client from a storage connection string. But Microsoft is working on making them easier to use, and on adding more features and functionality.

    PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Maintain cookie session in Android

    - by datguywhowanders
    Okay, I have an Android application that has a form in it: two EditTexts, a spinner, and a login button. The user selects the service from the spinner, types in their username and password, and clicks login. The data is sent via POST, a response is returned and handled, a new WebView is launched, the HTML string generated from the response is loaded, and I have the home page of whatever service the user selected. That's all well and good. Now, when the user clicks on a link, the login info can't be found, and the page asks the user to log in again. My login session is being dropped somewhere, and I'm not certain how to pass the info from the class that controls the main part of my app to the class that just launches the WebView activity.

    The onClick handler from the form login button:

        private class FormOnClickListener implements View.OnClickListener {
            public void onClick(View v) {
                String actionURL, user, pwd, user_field, pwd_field;
                actionURL = "thePageURL";
                user_field = "username"; // this changes based on selections in a spinner
                pwd_field = "password";  // this changes based on selections in a spinner
                user = "theUserLogin";
                pwd = "theUserPassword";

                List<NameValuePair> myList = new ArrayList<NameValuePair>();
                myList.add(new BasicNameValuePair(user_field, user));
                myList.add(new BasicNameValuePair(pwd_field, pwd));

                HttpParams params = new BasicHttpParams();
                DefaultHttpClient client = new DefaultHttpClient(params);
                HttpPost post = new HttpPost(actionURL);
                HttpResponse response = null;
                BasicResponseHandler myHandler = new BasicResponseHandler();
                String endResult = null;

                try {
                    post.setEntity(new UrlEncodedFormEntity(myList));
                } catch (UnsupportedEncodingException e) {
                    e.printStackTrace();
                }
                try {
                    response = client.execute(post);
                } catch (ClientProtocolException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                try {
                    endResult = myHandler.handleResponse(response);
                } catch (HttpResponseException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                List cookies = client.getCookieStore().getCookies();
                if (!cookies.isEmpty()) {
                    for (int i = 0; i < cookies.size(); i++) {
                        cookie = cookies.get(i);
                    }
                }

                Intent myWebViewIntent = new Intent(MsidePortal.this, MyWebView.class);
                myWebViewIntent.putExtra("htmlString", endResult);
                myWebViewIntent.putExtra("actionURL", actionURL);
                startActivity(myWebViewIntent);
            }
        }

    And here is the WebView class that handles the response display:

        public class MyWebView extends android.app.Activity {

            private class MyWebViewClient extends WebViewClient {
                @Override
                public boolean shouldOverrideUrlLoading(WebView view, String url) {
                    view.loadUrl(url);
                    return true;
                }
            }

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.web);

                MyWebViewClient myClient = new MyWebViewClient();
                WebView webview = (WebView) findViewById(R.id.mainwebview);
                webview.getSettings().setBuiltInZoomControls(true);
                webview.getSettings().setJavaScriptEnabled(true);
                webview.setWebViewClient(myClient);

                Bundle extras = getIntent().getExtras();
                if (extras != null) {
                    // Get endResult
                    String htmlString = extras.getString("htmlString");
                    String actionURL = extras.getString("actionURL");

                    Cookie sessionCookie = MsidePortal.cookie;
                    CookieSyncManager.createInstance(this);
                    CookieManager cookieManager = CookieManager.getInstance();
                    if (sessionCookie != null) {
                        cookieManager.removeSessionCookie();
                        String cookieString = sessionCookie.getName() + "=" + sessionCookie.getValue()
                                + "; domain=" + sessionCookie.getDomain();
                        cookieManager.setCookie(actionURL, cookieString);
                        CookieSyncManager.getInstance().sync();
                    }

                    webview.loadDataWithBaseURL(actionURL, htmlString, "text/html", "utf-8", actionURL);
                }
            }
        }

    I've had mixed success implementing that cookie solution. It seems to work for one service I log into that I know keeps the cookies on the server (old, archaic, but it works and they don't want to change it). The service I'm attempting now requires the user to keep cookies on their local machine, and it does not work with this setup. Any suggestions?

    Read the article

  • OS X web service spawns icon in taskbar while drawing image

    - by wuntee
    I have a web endpoint that displays an image of a string. When the following code is run (in Tomcat) it spawns a Java icon in the taskbar on OS X. Not sure if it is a problem, or what's going on. Looking for some sort of explanation.

        @RequestMapping("/text/{text}")
        public void textImage(HttpServletResponse response, @PathVariable("text") String text) {
            response.setContentType("image/png");
            try {
                OutputStream os = response.getOutputStream();
                BufferedImage bufferedImage = new BufferedImage((text.length() * 10), 14, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g2d = bufferedImage.createGraphics();
                g2d.setBackground(Color.WHITE);
                g2d.setPaint(Color.BLACK);
                Font font = new Font("sansserif", Font.PLAIN, 12);
                g2d.setFont(font);
                g2d.drawString(text, 0, 12);
                ImageIO.write(bufferedImage, "png", os);
            } catch (Exception e) {
                // nothing we can do, simply log the error
                logger.error("Could not draw string: ", e);
            }
        }
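    For what it's worth, the Dock icon typically appears because creating AWT/Java2D objects starts a GUI session. A common remedy (a suggestion, not part of the original post) is to run the JVM headless, either with -Djava.awt.headless=true in Tomcat's JAVA_OPTS or programmatically before any AWT class is touched:

        // run AWT in headless mode so server-side image drawing creates no Dock icon
        System.setProperty("java.awt.headless", "true");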

    Read the article

  • CDO.Message problem on Windows Server 2008

    - by dcrowell@
    I have a Classic ASP page that creates a CDO.Message object to send email. The code works on Windows Server 2003 but not 2008. On 2008 an "Access is denied" error gets thrown. Here is a simple test page I wrote to diagnose the problem. How can I get this to work on Windows Server 2008?

        dim myMail
        Set myMail = CreateObject("CDO.Message")
        If Err.Number <> 0 Then
            Response.Write ("Error Occurred: ")
            Response.Write (Err.Description)
        Else
            Response.Write ("CDO.Message was created")
            myMail.Subject = "Sending email with CDO"
            myMail.From = "[email protected]"
            myMail.To = "[email protected]"
            myMail.TextBody = "This is a message."
            myMail.Send
            set myMail = nothing
        End If

    Read the article

  • Calling a WCF Service from ActionScript 2

    - by Frank
    Hi All, I am a .NET programmer working with a Flash designer on a project. The design is that they will create a flash UI (implemented with AS2) to present a questionnaire. After it is completed by an end user, the will send me (a .net web service of some form) the answers to the questionnaire, I will perform a calculation, and I will send a response back (the response will likely be a single integer, though it may be a touple of (integer score, string description). Neither myself nor the designer is knowledgeable of Action Script. Does anyone have a snippet for such web service calls in AS2? Are there any soap libraries for AS2 that we could use, or should I expose a RESTful interface? Can it be as simple as having the designer concat the questionnaire answers into the query string of the service URL? What would be a typical data format for my response (xml, json, plain text) Thanks in advance for your help. Frank

    Read the article

  • Facebook FQL Question

    - by Michael
    I'm trying to use the Facebook JavaScript API to run FQL queries, and it works fine if I try to get users by username or uid, but it doesn't work when I'm searching by name.

        function get_username() {
            var name = prompt("Enter name: ");
            FB.api(
                {
                    method: 'fql.query',
                    query: 'SELECT username FROM user WHERE name in "' + name + '"'
                },
                function(response) {
                    var x = response[0].username;
                    alert('Username is ' + x);
                }
            );
        }

    I realize that this will probably return multiple users, but I can't figure out how to tell if it's returning multiple users or no users at all; it seems to freeze after trying to get response[0].username. I'm probably making a beginner mistake, but any ideas?
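    A sketch of a more defensive callback (the failure mode is an assumption: an error object or an empty result set leaves response[0] undefined, so the alert line throws; note too that FQL string equality is written name = "...", with IN reserved for parenthesized lists):

        function (response) {
            // an error object is not an array; an empty array means no matches
            if (!response || response.error_code || !response.length) {
                alert('No result: ' + (response && response.error_msg ? response.error_msg : 'no users found'));
                return;
            }
            for (var i = 0; i < response.length; i++) {
                alert('Username is ' + response[i].username);
            }
        }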

    Read the article
