Search Results

Search found 52413 results on 2097 pages for 'http proxy'.

  • System.Net.WebProxy in .NET Client

    - by NealWalters
    I don't yet have a good understanding of how the network is set up at my current client, but here is my issue. The vendor has a normal URL, but it might be a leased line. I have a .NET program that calls an external .asmx service. If I go to IE, Connections, LAN Settings, and uncheck "Automatically detect settings" and "Use automatic configuration script", then the .NET program runs and accesses the web service fine. But then, to access the internet again from the browser, I have to re-check the two boxes, and so on throughout the day while we are testing and browsing. I'm hoping to put some code like this in the .NET program so we can avoid constantly changing the settings: if (chkUseProxy.Checked) { myclient.Proxy = new System.Net.WebProxy(""); } I tried null and an empty string, and so far I'm not able to connect without errors. Is this possible, and if so, what might be the correct params for the constructor? Or are there other objects or properties that would need to be set? Thanks, Neal Walters
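
    For reference, a minimal sketch of one commonly suggested approach: a WebProxy constructed with no address behaves as a "direct connection" proxy, bypassing the IE settings entirely. The service class name and proxy address below are made up:

        // myclient stands in for whatever SoapHttpClientProtocol-derived
        // class the .asmx "Add Web Reference" wizard generated.
        var myclient = new VendorService();

        if (chkUseProxy.Checked)
        {
            // Route through an explicit proxy (address is an assumption).
            myclient.Proxy = new System.Net.WebProxy("http://corp-proxy:8080");
        }
        else
        {
            // An empty WebProxy (no address) bypasses every URI, so the
            // request goes direct instead of through the system proxy.
            myclient.Proxy = new System.Net.WebProxy();
        }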

  • can I see all SQL statements sent over an ODBC connection?

    - by Dave Cameron
    I'm working with a third-party application that uses ODBC to connect to, and alter, a database. During certain failure modes, the end results are not what I expect. To understand it better, I'd like some way of inspecting all the statements sent to the database. Is there a way to do this with ODBC? I know with JDBC I could use http://www.p6spy.com/ to see all statements sent, for example when debugging Hibernate. p6spy is a "proxy" driver that records commands sent and forwards them on to the real JDBC driver. Another possibility might be a protocol sniffer that would capture statements over the wire, although I'm unsure whether ODBC defines a standard wire protocol or only specifies the API. Does anyone know of existing tools that would allow me to do either of these things? Alternatively, is there another approach I could take?
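
    Worth noting: the ODBC driver manager itself can usually log every call made through it, including the SQL text passed to SQLPrepare/SQLExecDirect. A sketch for unixODBC (file location varies by install; on Windows the equivalent is the Tracing tab of the ODBC Data Source Administrator):

        ; /etc/odbcinst.ini -- enable driver-manager call tracing (unixODBC)
        [ODBC]
        Trace     = Yes
        TraceFile = /tmp/odbc-trace.log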

  • Can I turn off an Apache Directive then turn it on in an include?

    - by javafueled
    I have a VirtualHost block that includes common configuration items, one directive being ProxyPreserveHost. Can I "procedurally" turn off ProxyPreserveHost for a RewriteRule and then have the include turn it back on? For example:

        <VirtualHost *:80>
            ServerName www.blah.com
            ...
            ...
            ProxyPreserveHost off
            RewriteRule /somepath http://otherhost/otherpath [P]
            Include /path/to/file/turning-on-ProxyPreserveHost
        </VirtualHost>

    The otherhost is on a CDN, and preserving the host is creating a name resolution issue that prevents the proxying of content in the host namespace. ProxyPreserveHost is only allowed in server config or VirtualHost context. It doesn't look like I can selectively turn it off for the ProxyPass and ProxyPassReverse directives (encapsulated in the [P] flag of mod_rewrite).
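
    A note on why the include trick can't work: Apache evaluates configuration once at parse time, not per request, so within a single VirtualHost the last ProxyPreserveHost value simply wins. If memory serves, much later releases (the docs say 2.4.30 and later) allow the directive in directory context, which makes per-path scoping possible; a sketch using the question's placeholder hosts:

        # Apache 2.4.30+ only; scoped override instead of the include trick
        <Location /somepath>
            ProxyPreserveHost Off
            ProxyPass http://otherhost/otherpath
        </Location>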

  • Best implementation for MySQL replication with Rails 3?

    - by vonconrad
    We're looking at potentially setting up replication for our primary MySQL database, and while setting up the replication seems pretty straightforward, the application implementation seems a bit murkier. My first idea would be to set up a master-slave configuration with RW-splitting: all write queries (CREATE, INSERT, UPDATE) going to the master, and all read queries (SELECT) going to the slave. Having read up on it, there are essentially two options for how to implement this with our app:

    1. Using an independent middleware layer for all MySQL connections, such as MySQL Proxy or DBSlayer. However, the former is in alpha and the latter has limited documentation.
    2. Using a Ruby-based gem/plugin, such as Octopus, to achieve RW-splitting in the framework.

    If we wanted to go with a master-slave setup, what would you recommend moving forward? The other thought I've had was a master-master configuration, but I am unsure about the implementation of such a setup. Thoughts?

  • Should I go with Varnish instead of nginx?

    - by gotts
    I really like nginx. But recently I've found that Varnish gives you the opportunity to implement a smart caching reverse-proxy layer (with URL purging). I have a cluster of mongrels which are pretty resource-intensive, so if this caching layer can remove some load from the mongrels, that would be a great thing. I haven't found a way to implement the same caching layer for application pages with nginx (static content is cacheable, of course). Should I use Varnish instead? What would you recommend?
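
    For what it's worth, nginx 0.7.44 and later do ship a built-in proxy cache that covers proxied application pages too; a minimal sketch (upstream servers, paths and cache times are assumptions):

        # nginx.conf, inside the http {} block
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m;

        upstream mongrel_cluster {
            server 127.0.0.1:8000;    # your mongrels
            server 127.0.0.1:8001;
        }

        server {
            location / {
                proxy_pass http://mongrel_cluster;
                proxy_cache appcache;
                proxy_cache_valid 200 5m;    # cache OK responses for 5 minutes
            }
        }

    Varnish's VCL still gives finer-grained control, in particular the per-URL purging mentioned above, so that requirement may decide it.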

  • How can I add similar functionality to a number of methods in java?

    - by Roman
    I have a lot of methods for logging, like logSomeAction, logAnotherAction, etc. Now I want all these methods to make a small pause after printing messages (Thread.sleep). If I did it manually, I would do something like this:

        // before:
        public static void logSomeAction() {
            System.out.println(msg(SOME_ACTION));
        }

        // after:
        public static void logSomeAction() {
            System.out.println(msg(SOME_ACTION));
            try {
                Thread.sleep(2000);
            } catch (InterruptedException ignored) { }
        }

    I remember that Java has proxy classes and some other magic-making tools. Is there any way to avoid copy-n-pasting N sleep-blocks into N logging methods?
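
    A minimal sketch of the java.lang.reflect.Proxy route. Note it only intercepts instance methods called through an interface, so the static log methods would have to move behind one first; all names below are made up:

        import java.lang.reflect.Proxy;

        public class PausingLoggerDemo {
            public interface ActionLogger {
                void logSomeAction();
                void logAnotherAction();
            }

            public static void main(String[] args) {
                // Stand-in for the existing logging code:
                ActionLogger delegate = new ActionLogger() {
                    public void logSomeAction()    { System.out.println("some action"); }
                    public void logAnotherAction() { System.out.println("another action"); }
                };

                // One InvocationHandler adds the pause to every method:
                ActionLogger logger = (ActionLogger) Proxy.newProxyInstance(
                        ActionLogger.class.getClassLoader(),
                        new Class<?>[] { ActionLogger.class },
                        (proxy, method, methodArgs) -> {
                            Object result = method.invoke(delegate, methodArgs);
                            Thread.sleep(2000);   // the shared pause
                            return result;
                        });

                logger.logSomeAction();      // prints, then sleeps
                logger.logAnotherAction();
            }
        }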

  • Override l() function in Drupal

    - by Marco
    I'm currently working on a Drupal site (6.*) which, in production, will be accessed through some kind of HTTP proxy. This means I will have to rewrite all the links in my custom theme whenever the $_SERVER['HTTP_X_FORWARDED_SERVER'] variable is set to the domain people will access the site from. The site has a lot of internal linking, mostly through Views. My thought is that the easiest way to solve this would be to hook into the url() and/or l() functions and post-process the URL before returning it if HTTP_X_FORWARDED_SERVER is set. My problem is that I can't figure out how to hook into these functions, or if it's even possible without touching core. Has anyone had to do this? How did you solve it?
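
    Drupal 6 provides a hook point for exactly this without touching core: define custom_url_rewrite_outbound() in settings.php, and url() (which l() calls) will run every outgoing path through it. A sketch, with the public-facing domain assumed:

        // settings.php (Drupal 6)
        function custom_url_rewrite_outbound(&$path, &$options, $original_path) {
          if (isset($_SERVER['HTTP_X_FORWARDED_SERVER'])) {
            // Hypothetical proxy-facing domain:
            $options['base_url'] = 'http://www.example.com';
            $options['absolute'] = TRUE;
          }
        }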

  • How to validate a bunch of proxies against a URL?

    - by NJTechGuy
    I have a list of 100 proxies. The URL I am interested in is abc.com. I want to check the number of proxies which can successfully fetch this URL and the time taken for the same. I am hoping I made sense. I am a Python noob. I am looking for a code snippet. A helping hand is really appreciated :)

    Proxies:

        200.43.54.212
        200.43.54.212
        200.43.54.212
        200.43.54.212

    URL: abc.com

    Desired result:

        Proxy          isGood  Time
        200.43.54.112  n       23.12
        200.43.54.222  n       12.34
        200.43.54.102  y       11.09
        200.43.54.111  y       8.85

    P.S.: All the above proxies use either port 80 or 8080.
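
    A minimal sketch of such a checker, assuming the third-party requests library is installed (pip install requests); proxy list and URL as above:

        import time
        import requests  # assumption: third-party library, not stdlib

        PROXIES = ['200.43.54.212:80', '200.43.54.212:8080']  # host:port pairs
        URL = 'http://abc.com'

        print('%-16s %-6s %s' % ('Proxy', 'isGood', 'Time'))
        for proxy in PROXIES:
            start = time.time()
            try:
                # Fetch the URL through this proxy, 15 s budget per attempt.
                requests.get(URL, proxies={'http': 'http://' + proxy}, timeout=15)
                good = 'y'
            except requests.RequestException:
                good = 'n'
            print('%-16s %-6s %.2f' % (proxy.split(':')[0], good,
                                       time.time() - start))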

  • How to redirect a live data stream adding to it another header and returning it on demand? (PHP)

    - by Ole Jak
    I have a URL like http://localhost:8020/stream.flv. On request, my PHP script should behave like a proxy: fetch all the data it can get from that URL and return it to the user, prefixed with my own headers and my own beginning-of-file data. So I have the headers and some data I want to write at the beginning of the response, like:

        # content headers
        header("Content-Type: video/x-flv");
        header("Content-Disposition: attachment; filename=\"" . $fileName . "\"");
        header("Content-Length: " . $fileSize);

        # FLV file format header
        if ($seekPos != 0) {
            print('FLV');
            print(pack('C', 1));
            print(pack('C', 1));
            print(pack('N', 9));
            print(pack('N', 9));
        }

    How do I do such a thing?
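
    A minimal sketch of the relaying part, assuming allow_url_fopen is enabled and reusing $fileName/$fileSize/$seekPos from above:

        // ...after the header() calls and FLV prologue above:
        $src = fopen('http://localhost:8020/stream.flv', 'rb');
        while (!feof($src)) {
            echo fread($src, 8192);   // relay in 8 KB chunks, no full buffering
            flush();                  // push each chunk to the client right away
        }
        fclose($src);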

  • Dynamic proxies to auto-save models

    - by atomman
    I'm trying to make some auto-magic happen in Java, using proxies to track objects and save them when a set* method is called. I started off using Java's built-in Proxy, and everything works just fine, but from what I can understand I need an interface for every model, which is something I'm trying to avoid. This is where CGLIB comes in: it allows me to create proxies of my models without the use of interfaces. BUT, how can I now retrieve the original object, the one I am trying to save? The optimal solution for me would be something like the EntityManager interface used by Hibernate, where you keep your original object but it is still tracked.
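
    One way to sidestep the retrieval question, sketched below with made-up names: hold the untouched model inside the MethodInterceptor and delegate every call to it, so the "original" is always at hand. Assumes the cglib jar is on the classpath:

        import java.lang.reflect.Method;
        import net.sf.cglib.proxy.Enhancer;
        import net.sf.cglib.proxy.MethodInterceptor;
        import net.sf.cglib.proxy.MethodProxy;

        class TrackingInterceptor implements MethodInterceptor {
            private final Object target;   // the plain, un-proxied model

            TrackingInterceptor(Object target) { this.target = target; }

            public Object intercept(Object obj, Method method, Object[] args,
                                    MethodProxy methodProxy) throws Throwable {
                Object result = methodProxy.invoke(target, args); // run on the original
                if (method.getName().startsWith("set")) {
                    save(target);   // hypothetical persistence hook
                }
                return result;
            }

            private void save(Object model) { /* persist the original, not the proxy */ }
        }

        // usage (Model is any concrete, non-final class):
        // Model tracked = (Model) Enhancer.create(Model.class,
        //         new TrackingInterceptor(new Model()));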

  • Logging on to two sites simultaneously

    - by James Wakefield
    I want to log on to two sites simultaneously to enable a single sign-on solution. We have a smallish wiki created with Apple Wiki, and an intranet site on an ASPX CMS by Elcom. Both use Active Directory for credentials. Currently they are on different domains, but we could enable a rewrite using our load balancer (Citrix NetScaler) or IIS. These sites are on different servers: one a mysterious Mac system, and the other IIS 6.0 on Windows 2003. Now, I am almost certain that a reverse proxy setup will solve this, but I really just need someone to agree that it does, and to tell me what I should look out for, if anything. I just want to have an invisible logon screen in an iframe and clone the user name and password into it using JavaScript.

  • Problem between Glassfish and Spring Security Basic Authentication

    - by Raspayu
    Hi! I am enabling simple HTTP Basic Authentication with Spring Security in my project. My environment is a Glassfish server (bundled with NetBeans), and almost everything works perfectly: I have set it up to ask for authentication only on the POST method, with users hard-coded via "user-service", and it works for user names with no special characters. The problem comes when I set up a user whose name contains "@" or ".". Here is the Spring Security related part of my servlet.xml:

        <security:http>
          <security:intercept-url method="POST" pattern="/**" access="ROLE_USER" />
          <security:http-basic/>
        </security:http>
        <security:authentication-manager alias="authenticationManager">
          <security:authentication-provider user-service-ref="uservice"/>
        </security:authentication-manager>
        <security:user-service id="uservice">
          <security:user name="[email protected]" password="pswd1" authorities="ROLE_USER" />
          <security:user name="[email protected]" password="pswd2" authorities="ROLE_USER" />
          <security:user name="pepe" password="pepito" authorities="ROLE_USER" />
        </security:user-service>

    I have also looked at what the browser sends to the listening port, and it correctly sends the "username:password" pair in Base64, so I think the problem is in my server (Glassfish v3). Does anyone have any idea? Thanks in advance! Raspayu

  • When sending headers to download a PDF, Safari appends .html

    - by alex
    Here are the request and response headers for http://www.example.com/get/pdf:

        GET /~get/pdf HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.example.com
        Cookie: etc

        HTTP/1.1 200 OK
        Date: Thu, 29 Apr 2010 02:20:43 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
        X-Powered-By: Me
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Pragma: no-cache
        Cache-Control: private
        Content-Disposition: attachment; filename="File #1.pdf"
        Content-Length: 18776
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Basically, the response headers are sent by DOMPDF's stream() method. In Firefox, the file is saved as File #1.pdf. However, in Safari, the file is saved as File #1.pdf.html. Does anyone know why Safari is appending the html extension to the filename?
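
    A likely culprit, for what it's worth: Safari picks the download extension from the Content-Type, and the response above declares text/html. A sketch of headers that match the actual payload (length as in the question):

        // Declare the real media type so Safari keeps the .pdf name:
        header('Content-Type: application/pdf');
        header('Content-Disposition: attachment; filename="File #1.pdf"');
        header('Content-Length: 18776');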

  • JSESSIONID collision between two servers on same ip but different ports

    - by Steve Armstrong
    I've got a situation where I have two different webapps running on a single server, using different ports. They're both running Java's Jetty servlet container, so they both use a cookie parameter named JSESSIONID to track the session id. These two webapps are fighting over the session id:

    1. Open a Firefox tab and go to WebApp1.
    2. WebApp1's HTTP response has a Set-Cookie header with JSESSIONID=1.
    3. Firefox now sends a Cookie header with JSESSIONID=1 in all its HTTP requests to WebApp1.
    4. Open a second Firefox tab and go to WebApp2.
    5. The HTTP request to WebApp2 also has a Cookie header with JSESSIONID=1, but in the doGet, when I call req.getSession(false) I get null. And if I call req.getSession(true) I get a new Session object, but then the HTTP response from WebApp2 has a Set-Cookie header with JSESSIONID=20.
    6. Now WebApp2 has a working session, but WebApp1's session is gone. Going to WebApp1 will give me a new session, blowing away WebApp2's session.
    7. Continue forever.

    So the sessions are thrashing between the two web apps. I'd really like req.getSession(false) to return a valid session if there's already a JSESSIONID cookie defined. One option is to basically reimplement the session framework with a HashMap and cookies called WEBAPP1SESSIONID and WEBAPP2SESSIONID, but that sucks, and it means I'll have to hack the new session stuff into ActionServlet and a few other places. This must be a problem others have encountered. Is Jetty's HttpServletRequest.getSession(boolean) just crappy?
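
    Context that explains the thrashing: browser cookies are scoped by host (and path) but never by port, so the two apps genuinely share one JSESSIONID cookie. The usual fix is to rename one app's session cookie rather than reimplementing sessions; in Jetty 6 this can be done per context (the parameter name is Jetty-specific, the value just an example):

        <!-- web.xml of WebApp2 -->
        <context-param>
          <param-name>org.mortbay.jetty.servlet.SessionCookie</param-name>
          <param-value>WEBAPP2SESSIONID</param-value>
        </context-param>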

  • Help Modifying Generic REST Helper PHP Example Code to Support XML DOM

    - by Jennifer Baker
    Hi! I found this example PHP source code at "HTTP POST from PHP, without cURL". I need some help modifying it to support XML DOM for manipulating a REST API. I thought that if I updated the xml case of the switch statement from

        $r = simplexml_load_string($res);

    to

        $r = new DOMDocument();
        $r->load($res);

    then it would work, but it doesn't. :( Any help would be appreciated.

        function rest_helper($url, $params = null, $verb = 'GET', $format = 'xml')
        {
          $cparams = array(
            'http' => array(
              'method' => $verb,
              'ignore_errors' => true
            )
          );
          if ($params !== null) {
            $params = http_build_query($params);
            if ($verb == 'POST') {
              $cparams['http']['content'] = $params;
            } else {
              $url .= '?' . $params;
            }
          }
          $context = stream_context_create($cparams);
          $fp = fopen($url, 'rb', false, $context);
          if (!$fp) {
            $res = false;
          } else {
            // If you're trying to troubleshoot problems, try uncommenting the
            // next two lines; it will show you the HTTP response headers across
            // all the redirects:
            // $meta = stream_get_meta_data($fp);
            // var_dump($meta['wrapper_data']);
            $res = stream_get_contents($fp);
          }
          if ($res === false) {
            throw new Exception("$verb $url failed: $php_errormsg");
          }
          switch ($format) {
            case 'json':
              $r = json_decode($res);
              if ($r === null) {
                throw new Exception("failed to decode $res as json");
              }
              return $r;
            case 'xml':
              $r = simplexml_load_string($res);
              if ($r === null) {
                throw new Exception("failed to decode $res as xml");
              }
              return $r;
          }
          return $res;
        }
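
    In case it helps: DOMDocument::load() expects a file path or URL, while $res holds the XML text itself; loadXML() is the string-loading variant. A sketch of the corrected case:

        case 'xml':
          // $res is a string of XML, so parse it with loadXML(), not load():
          $r = new DOMDocument();
          if (!$r->loadXML($res)) {
            throw new Exception("failed to decode $res as xml");
          }
          return $r;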

  • Axis webservice calls fail sometimes, access.log shows content!

    - by epischel
    Hi, our app is a webservice client (Axis 1) of a third-party webservice (also Axis 1). We have used it for some years now. Since a few weeks ago, we (as a client) sometimes get HTTP status 400 (Bad Request) or read timeouts when calling the webservice. Strangely, the access.log of the service shows part of the request or the response instead of the URL. It looks like this (apparently the end of the request string):

        x.x.x.x -> y.y.y.y:8080 - - [timestamp] "POST /webservice HTTP/1.0" 200 16127 0
        x.x.x.x -> y.y.y.y:8080 - - [timestamp] "POST /webservice HTTP/1.0" 200 22511 1
        x.x.x.x -> y.y.y.y:8080 - - [timestamp] "il=\"true\"/><nsl:text xsi:type=\"xsd:string\" xsi:nil=\"true\"/></SOAPSomeOperation></soapenv:Body></soapenv:Envelope> Axis/1.4" 400 299 0

    or (some string out of what looks like the request):

        x.x.x.x -> y.y.y.y:8080 - - [timestamp] ":string\">some text</sometag><othertag>moretext" 400 299 0

    or, in some other cases, what looks like two requests thrown together (... means XML left out):

        x.x.x.x -> y.y.y.y:8080 - - [timestamp] "...</someop></soapenv:Body></soapenv:Envelope>:xsd=\"http://www.w3.org/2001/XMLSchema\"...</soapenv:Body></soapenv:Envelope>" 400 299 0

    The application log does not give any hints. The frequency of such calls is about 1% of all calls to the service. The only discriminator I know of so far is that it started happening after operations informed us that the service URL changed because of a "server migration". Has anyone experienced such a phenomenon? Does somebody have an idea what's wrong and how to fix it? Thanks,

  • Image Grabbing with False Referer

    - by Mr Carl
    Hey guys, I'm struggling with grabbing an image at the moment... sounds silly, but check out this link :P http://manga.justcarl.co.uk/A/Oishii_Kankei/31/1 If you get the image URL, the image loads. Go back, and it looks like it's working fine, but that's just the browser loading the cached image. The application was working fine before; I'm thinking they implemented some kind of referer check on their images. So I found some code and came up with the following...

        $ref  = 'http://www.thesite.com/';
        $file = 'theimage.jpg';
        $hdrs = array(
          'http' => array(
            'method' => "GET",
            'header' => "accept-language: en\r\n" .
                        "Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\n" .
                        "Referer: $ref\r\n" .   // setting the http-referer
                        "Content-Type: image/jpeg\r\n"
          )
        );

        // get the requested page from the server
        // with our header as a request-header
        $context = stream_context_create($hdrs);
        $fp = fopen($imgChapterPath.$file, 'rb', false, $context);
        fpassthru($fp);
        fclose($fp);

    Essentially it's making up a false referrer. All I'm getting back, though, is a bunch of gibberish (thanks to fpassthru), so I think it's getting the image, but I'm afraid to say I have no idea how to output/display the collected image. Cheers, Carl
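
    The "gibberish" suggests the JPEG bytes are arriving but being rendered as text. A sketch of the output side, assuming the fetch itself succeeds: send an image Content-Type to the browser before streaming the bytes through:

        header('Content-Type: image/jpeg');   // must come before any output
        $fp = fopen($imgChapterPath . $file, 'rb', false, $context);
        fpassthru($fp);
        fclose($fp);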

  • Using Pastebin API in Node.js

    - by wiill
    I've been trying to post a paste to Pastebin in Node.js, but it appears that I'm doing it wrong. I'm getting a Bad API request, invalid api_option, even though I'm clearly setting api_option to paste as the documentation asks for.

        var http = require('http');
        var qs = require('qs');

        var query = qs.stringify({
          api_option: 'paste',
          api_dev_key: 'xxxxxxxxxxxx',
          api_paste_code: 'Awesome paste content',
          api_paste_name: 'Awesome paste name',
          api_paste_private: 1,
          api_paste_expire_date: '1D'
        });

        var req = http.request({
          host: 'pastebin.com',
          port: 80,
          path: '/api/api_post.php',
          method: 'POST',
          headers: {
            'Content-Type': 'multipart/form-data',
            'Content-Length': query.length
          }
        }, function(res) {
          var data = '';
          res.on('data', function(chunk) {
            data += chunk;
          });
          res.on('end', function() {
            console.log(data);
          });
        });

        req.write(query);
        req.end();

    console.log(query) confirms that the string is well encoded and that api_option is there, set to paste. Now, I've been searching forever for possible causes. I also tried setting the encoding on the write, req.write(query, 'utf8'), because the Pastebin API mentions that the POST must be UTF-8 encoded. I rewrote the thing over and over and re-consulted the Node HTTP documentation many times. I'm pretty sure I completely missed something here, because I don't see how this could fail. Does anyone have an idea of what I have done wrong?
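
    One mismatch stands out as a plausible cause: qs.stringify() produces a URL-encoded body, but the request advertises multipart/form-data, which the PHP on Pastebin's side would parse very differently, so api_option never materializes. (Content-Length should also count bytes, not characters.) A sketch of headers that match the body:

        headers: {
          'Content-Type': 'application/x-www-form-urlencoded',
          'Content-Length': Buffer.byteLength(query)
        }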

  • Webserver sending corrupt or corrupting served files

    - by NotIan
    EDIT: Looks like the problem was a rootkit that corrupted a bunch of low-level Linux commands, including top, ps, ifconfig, netstat and others. The problem was resolved by taking all web files off the server and wiping it. A dedicated server we operate is having a strange issue: files are not being sent complete, or are showing up with garbage data. Example: http://sustainablefitness.com/images/banner_bootcamps.jpg To make matters more confusing, this corruption does NOT happen when the files are served as https (I would post a link, but I don't have enough rep points; just add an 's' after http in the link above). When I throw load at the server, I get dozens of (swapd)s in top; this is the only thing that really jumps out. I can't post images, but ( imgur.com / ZArSq.png ) is a screenshot of top. I have tried a lot of stuff so far, and I am willing to try anything.

  • Ajax request not receiving xml from Django

    - by amougeot
    I have a Django server which handles requests to a URL that returns some HTML for use in an image gallery. I can navigate to the URL and the browser will display the HTML that is returned, but I can't get that same HTML by making an AJAX call (using jQuery) to the same URL. This is the view that generates the response:

        def gallery_images(request, gallery_name):
            return render_to_response('galleryimages.html',
                                      {'images': get_images_of_gallery(gallery_name)},
                                      mimetype='text/xml')

    This is the galleryimages.html template:

        {% for image in images %}
        <div id="{{image.name}}big">
          <div class="actualImage" style="background-image:url({{image.image.name}});">
            <h1>{{image.caption|safe}}</h1>
          </div>
        </div>
        {% endfor %}

    This is the jQuery call I am making:

        $("#allImages").load("http://localhost:8000/galleryimages/Web");

    However, this loads nothing into my #allImages div. I've used Firebug and run jQuery's Ajax method .get("http://localhost:8000/galleryimages/Web"), and Firebug says that the response text is completely empty. When I check my Django server log, this is the entry I see when I navigate to the URL manually, through my browser:

        [16/Jan/2010 17:34:10] "GET /galleryimages/Web HTTP/1.1" 200 215

    This is the entry in the server log for when I make the AJAX call:

        [16/Jan/2010 17:36:19] "OPTIONS /galleryimages/Web HTTP/1.1" 200 215

    Why does the AJAX request not get the XML that my Django page is serving?
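
    The OPTIONS entry is the telltale: the page issuing the AJAX call is evidently not served from localhost:8000 itself, so the browser treats the request as cross-origin and sends a preflight that never becomes the real GET. Serving the page from the same origin avoids this entirely; otherwise the view has to opt in to cross-origin access. A sketch (wide-open origin, purely illustrative):

        def gallery_images(request, gallery_name):
            response = render_to_response('galleryimages.html',
                                          {'images': get_images_of_gallery(gallery_name)},
                                          mimetype='text/xml')
            # Assumption: any origin may read this; restrict it in production.
            response['Access-Control-Allow-Origin'] = '*'
            return response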

  • Is this the best way to make an API request using PHP CURL?

    - by Abs
    Hello all, I have a site that has a simple API which can be used via HTTP. I wish to make use of the API and submit data about 1000-1500 times at one time. Here is their API: http://api.jum.name/ I have constructed the URL to make a submission, but now I am wondering what is the best way to make these 1000-1500 API GET requests. Here is the PHP cURL implementation I was thinking of:

        $ch = curl_init();
        $add = 'http://www.mysite.com/3rdparty/API/api.php?fn=post&username=test&password=tester'
             . '&url=http://google.com&category=21&title=story a&content=content text&tags=Season,news';
        curl_setopt($ch, CURLOPT_URL, $add);
        curl_setopt($ch, CURLOPT_POST, 0);
        curl_setopt($ch, CURLOPT_COOKIEFILE, 'files/cookie.txt');
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
        $postdata = curl_exec($ch);

    Shall I close the cURL connection every time I make a submission? Can I rewrite the above in a better way that will make these 1000-1500 submissions quicker? Thanks all
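
    Two common improvements, sketched below: reuse one handle across submissions instead of opening and closing it each time (keeps the HTTP connection alive), or use curl_multi to run a batch in parallel. $urls is assumed to hold the prebuilt API URLs:

        $mh = curl_multi_init();
        $handles = array();
        foreach (array_slice($urls, 0, 50) as $i => $u) {   // one batch of 50
            $ch = curl_init($u);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_multi_add_handle($mh, $ch);
            $handles[$i] = $ch;
        }
        do {
            curl_multi_exec($mh, $running);   // drive all transfers
            curl_multi_select($mh);           // wait for network activity
        } while ($running > 0);
        foreach ($handles as $ch) {
            $result = curl_multi_getcontent($ch);   // per-request response
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);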

  • curl http_code of 000

    - by Mikkel Paulson
    I have a shell script that I use to monitor loading times and response codes on my live server cluster. It runs a total of 250 iterations every 5 minutes, distributed across 10 servers and 6 sites. It uses curl with the -w flag to return pertinent information which is then parsed by my shell script:

        curl -svw 'monitor_load_times %{time_total} %{http_code}' -b 'server=$server' -m 15 -o /dev/null $url 2>&1

    This information is then parsed by a graphing script that can display a number of different responses. However, curl will occasionally return a response code of "000". When this happens, it seems to happen multiple times at once, despite being distributed over many iterations. What I'm trying to work out is whether this is a client-side issue that's skewing my results or whether it's actually indicative of a server-side problem affecting my entire cluster. Does 000 mean that the connection was dropped? Database entries corresponding to curl iterations with that response code have "0.000" for the time_total value. All of the search results I've found for curl returning a code of 000 are related to HTTPS being unsupported, but all of my test URLs are HTTP. (A spike in 500 errors that showed up in the graphs is a completely unrelated issue that affected my servers last night.)
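
    One data point that usually settles it: %{http_code} prints 000 whenever curl never received an HTTP status line at all (DNS failure, connection refused, timeout), and curl's own exit code says which. A sketch of capturing it alongside the existing output:

        out=$(curl -sv -w 'monitor_load_times %{time_total} %{http_code}' \
              -b "server=$server" -m 15 -o /dev/null "$url" 2>&1)
        rc=$?
        # rc 6 = couldn't resolve host, 7 = couldn't connect, 28 = timed out
        echo "$out exit=$rc"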

  • .NET HttpListener Prefix issue with anything other than localhost

    - by jchristner
    I'm trying to use C# and HttpListener with a prefix of anything other than localhost, and it fails. I.e., if I give it "server1", then http://localhost:1234 works but http://server1:1234 fails. The code is...

        HttpListener listener = new HttpListener();
        string prefix = "http://server1:1234/";   // prefixes must end with '/'
        listener.Prefixes.Add(prefix);
        listener.Start();

    The failure occurs on listener.Start() with an exception of "Access is denied."
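
    "Access is denied" on Start() typically means the account lacks a URL namespace reservation: http.sys only lets non-administrators listen on localhost by default. On Vista/Server 2008 and later, the reservation is made with netsh from an elevated prompt (domain/user below are placeholders; on XP/2003 the equivalent tool is httpcfg.exe):

        netsh http add urlacl url=http://server1:1234/ user=MYDOMAIN\myuser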

  • In WCF How Can I add SAML 2.0 assertion to SOAP Header?

    - by Tone
    I'm trying to add the saml 2.0 assertion node from the soap header example below - I came across the samlassertion type in the .net framework but that looks like it is only for saml 1.1. <S:Header> <To xmlns="http://www.w3.org/2005/08/addressing">https://rs1.greenwaymedical.com:8181/CONNECTGateway/EntityService/NhincProxyXDRRequestSecured</To> <Action xmlns="http://www.w3.org/2005/08/addressing">tns:ProvideAndRegisterDocumentSet-bRequest_Request</Action> <ReplyTo xmlns="http://www.w3.org/2005/08/addressing"> <Address>http://www.w3.org/2005/08/addressing/anonymous</Address> </ReplyTo> <MessageID xmlns="http://www.w3.org/2005/08/addressing">uuid:662ee047-3437-4781-a8d2-ee91bc940ef0</MessageID> <wsse:Security S:mustUnderstand="1"> <wsu:Timestamp xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" wsu:Id="_1"> <wsu:Created>2010-05-26T03:51:57Z</wsu:Created> <wsu:Expires>2010-05-26T03:56:57Z</wsu:Expires> </wsu:Timestamp> <saml2:Assertion xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:exc14n="http://www.w3.org/2001/10/xml-exc-c14n#" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xenc="http://www.w3.org/2001/04/xmlenc#" xmlns:xs="http://www.w3.org/2001/XMLSchema" ID="bd1ecf8d-a6d8-488d-9183-a11227c6a219" IssueInstant="2010-05-26T03:51:57.959Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=SU,O=SAML User,L=Los Angeles,ST=CA,C=US</saml2:Issuer> <saml2:Subject> <saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">UID=kskagerb</saml2:NameID> <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key"> <saml2:SubjectConfirmationData> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEUg..gwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </saml2:SubjectConfirmationData> </saml2:SubjectConfirmation> </saml2:Subject> <saml2:AuthnStatement AuthnInstant="2009-04-16T13:15:39.000Z" SessionIndex="987"> <saml2:SubjectLocality Address="158.147.185.168" DNSName="cs.myharris.net"/> <saml2:AuthnContext> <saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:X509</saml2:AuthnContextClassRef> </saml2:AuthnContext> </saml2:AuthnStatement> <saml2:AttributeStatement> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:subject-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Karl S Skagerberg</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">InternalTest2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:nhin:names:saml:homeCommunityId"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.16.840.1.113883.3.441</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:subject:role"> <saml2:AttributeValue> <hl7:Role 
xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="307969004" codeSystem="2.16.840.1.113883.6.96" codeSystemName="SNOMED_CT" displayName="Public Health" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:purposeofuse"> <saml2:AttributeValue> <hl7:PurposeForUse xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="PUBLICHEALTH" codeSystem="2.16.840.1.113883.3.18.7.1" codeSystemName="nhin-purpose" displayName="Use or disclosure of Psychotherapy Notes" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:resource:resource-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">500000000^^^&amp;1.1&amp;ISO</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> <saml2:AuthzDecisionStatement Decision="Permit" Resource="https://158.147.185.168:8181/SamlReceiveService/SamlProcessWS"> <saml2:Action Namespace="urn:oasis:names:tc:SAML:1.0:action:rwedc">Execute</saml2:Action> <saml2:Evidence> <saml2:Assertion ID="40df7c0a-ff3e-4b26-baeb-f2910f6d05a9" IssueInstant="2009-04-16T13:10:39.093Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=Harris,O=HITS,L=Melbourne,ST=FL,C=US</saml2:Issuer> <saml2:Conditions NotBefore="2009-04-16T13:10:39.093Z" NotOnOrAfter="2009-12-31T12:00:00.000Z"/> <saml2:AttributeStatement> <saml2:Attribute Name="AccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Ref-1234</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="InstanceAccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Instance-1</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> </saml2:Assertion> </saml2:Evidence> </saml2:AuthzDecisionStatement> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#bd1ecf8d-a6d8-488d-9183-a11227c6a219"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue>ONbZqPUyFVPMx4v9vvpJGNB4cao=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>Dm/aW5bB..pF93s=</ds:SignatureValue> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEU..bzqgwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </ds:Signature> </saml2:Assertion> <ds:Signature xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" Id="_2"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsse S"/> </ds:CanonicalizationMethod> <ds:SignatureMethod 
Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#_1"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsu wsse S"/> </ds:Transform> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:SignatureValue> <ds:KeyInfo> <wsse:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0"> <wsse:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">bd1ecf8d-a6d8-488d-9183-a11227c6a219</wsse:KeyIdentifier> </wsse:SecurityTokenReference> </ds:KeyInfo> </ds:Signature> </wsse:Security> </S:Header> I've been researching for days and cannot seem to come up with a straightforward way of doing this in WCF. The web service is running on Glassfish and is soap 1.1, I've tried using all the packaged wcf bindings but have not been able to get them to work. I started down the path of using a MessageInspector, and wrote one but then realized there must be a better way, surely WCF provides some way to insert saml 2.0 assertions. I've made the most progress writing a custom binding - i've been able to get the timestamp and signature nodes in the soap header, but cannot for the life of me figure out the saml assertion. Any ideas? public static System.ServiceModel.Channels.Binding BuildCONNECTCustomBinding() { TransportSecurityBindingElement transportSecurityBindingElement = SecurityBindingElement.CreateCertificateOverTransportBindingElement(MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10); TextMessageEncodingBindingElement textMessageEncodingBindingElement = new TextMessageEncodingBindingElement(MessageVersion.Soap11WSAddressing10, System.Text.Encoding.UTF8); HttpsTransportBindingElement httpsTransportBindingElement = new HttpsTransportBindingElement(); SecurityTokenReferenceType securityTokenReference = new SecurityTokenReferenceType(); BindingElementCollection bindingElementCollection = new BindingElementCollection(); bindingElementCollection.Add(transportSecurityBindingElement); bindingElementCollection.Add(textMessageEncodingBindingElement); bindingElementCollection.Add(httpsTransportBindingElement); CustomBinding cb = new CustomBinding(bindingElementCollection); cb.CreateBindingElements(); return cb; }

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to create only one cache per account/site. This can be done with FastCGI (last updated in 2006…), but with fcgid, APC has to create multiple caches for the multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong), even if you create a pool per account, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool… if this is the case, then shouldn't APC use this user and not have access to the other pools' caches? An article here (from 2011) suggests that you would need to run one php-fpm process per pool, creating multiple launchers on different ports with different config files, one pool per config file: http://groups.drupal.org/node/198168 Is this still necessary? If so, what would be the impact of running, say, 800 php-fpm processes? Would it be mainly memory? If so, how can I work out what the memory impact would be? I guess it would be better to run 800 php-fpm processes than to have accounts creating multiple APC caches for a single site. If on average an account creates a 50 MB cache and creates 3 caches, that makes 150 MB per account, which makes 120 GB over 800 accounts… whereas if each account uses on average only 50 MB, that would make 40 GB. We will have at least 128 GB of RAM on our next server, so 40 GB is acceptable if running 800 php-fpm processes does not create an overhead of more than 20 GB! What do you think: is PHP-FPM the best way to provide a secure APC cache on shared hosting with a server that has a decent amount of memory? Or should I be looking at another system? Thanks!
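
    For reference, a sketch of what one pool per account looks like under a single php-fpm master (names and paths are assumptions). This gives each pool its own UID/GID and chroot, but note that the concern quoted above stands: all pools under one master still share the single APC shared-memory segment, so truly separate caches need separate master processes, as in the linked article:

        ; /etc/php-fpm.d/site1.conf -- one pool per account
        [site1]
        user   = site1
        group  = site1
        listen = /var/run/php-fpm/site1.sock
        chroot = /home/site1
        pm = ondemand                 ; PHP 5.3.9+: spawn workers on demand
        pm.max_children = 5           ; cap per-account memory
        pm.process_idle_timeout = 10s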
