Search Results

Search found 51708 results on 2069 pages for 'http caching'.


  • expiring image assets referenced from stylesheets

    - by crankharder
    So Rails appends timestamps to CSS, JS and image files: image_tag 'foo.png' => <img src="foo.png?123123123123" /> # or something like that ...which is really useful for doing far-future expiration, etc. with Apache's help. But what about images referenced from stylesheets? They don't get an appended timestamp. So it seems to me that it's entirely possible to update one of those images, redeploy, and then not see the change because the browser doesn't think the file has been updated. Unless I'm missing something. If I'm not, is there a decent solution to this problem?
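
    A minimal Java sketch of the general cache-busting idea behind those timestamps (derive the asset's query string from the file contents rather than from a deploy time, so far-future expiry stays safe), with purely illustrative paths; this is not the Rails helper itself:

    ```java
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    // Build a fingerprinted URL for an image referenced from a stylesheet.
    // The digest only changes when the file changes, so a far-future Expires
    // header on the old URL can never serve a stale image.
    public class AssetFingerprint {
        public static String fingerprintedUrl(Path asset, String publicPath) throws Exception {
            byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(asset));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return publicPath + "?" + hex;   // e.g. /images/bg.png?9f86d081...
        }

        public static void main(String[] args) throws Exception {
            // hypothetical local file and public URL
            System.out.println(fingerprintedUrl(Path.of("public/images/bg.png"), "/images/bg.png"));
        }
    }
    ```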

    Read the article

  • What is the best way to read GetResponseStream()?

    - by Dev Dona
    What is the best way to read an HTTP response from GetResponseStream? Currently I'm using the following approach. Using SReader As StreamReader = New StreamReader(HttpRes.GetResponseStream) SourceCode = SReader.ReadToEnd() End Using I'm not quite sure if this is the most efficient way to read an HTTP response. I need the output as a string. I've seen an article ( http://www.informit.com/guides/content.aspx?g=dotnet&seqNum=583 ) with a different approach, but I'm not quite sure if it's a good one, and in my tests that code had some encoding issues on different websites. How do you read web responses?
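
    For comparison, the same read in Java, where java.net.http.HttpClient (Java 11+) collects the body into a String using the charset declared in the response's Content-Type header and falls back to UTF-8, which avoids the encoding problems mentioned above. The URL is a placeholder:

    ```java
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Read an HTTP response body into a String in one call.
    public class ReadBody {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")).GET().build();
            // ofString() honours the charset in Content-Type and defaults to UTF-8
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }
    ```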

    Read the article

  • .htaccess Problem

    - by ocergynohtna
    I'm having trouble redirecting URLs using the .htaccess file. This is what my .htaccess file looks like: Redirect 301 /file-name/example.php http://www.mysite.com/file-name/example-001.php Redirect 301 /section-name/example.php http://www.my-site.com/section-name/example-002.php RewriteEngine on RewriteCond %{HTTP_HOST} !^www.mysite.com$ [NC] RewriteRule ^(.*)$ http://www.mysite.com/$1 [L,R=301] RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.+)/(.*)$ hqtemplates/articles.php?file_name=$2 [L] php_value session.use_only_cookies 1 php_value session.use_trans_sid 0 Now the problem is that when I go to the page www.my-site.com/file-name/example.php, instead of redirecting me to www.my-site.com/file-name/example-001.php it redirects me to www.my-site.com/file-name/example.php?file_name=example-001.php. For some reason it adds "?file_name=example-001.php" to the URL. Does anyone know why this is happening and how to fix it?

    Read the article

  • Hibernate query cache automatically refreshed on external update?

    - by artgon
    I'm creating a service that has read-only access to the database. I have a query cache and a second level cache enabled (READ_ONLY mode) in Hibernate to speed up the service, as the tables being accessed change rarely. My question is, if someone goes into the DB and changes the tables manually (i.e. outside of Hibernate), does the cache recognize automatically that it needs to be cleared? Is there a time limit on the cache?
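
    For what it's worth, neither the second-level cache nor the query cache can see changes made behind Hibernate's back; they are refreshed only by Hibernate's own writes or by the region's expiry settings in the cache provider. A hedged Java sketch (Hibernate 4/5 Cache API) of evicting the regions when you know an external change has happened:

    ```java
    import org.hibernate.Cache;
    import org.hibernate.SessionFactory;

    // Explicit eviction as a fallback when tables are changed outside Hibernate.
    // Alternatively, give the cache regions a time-to-live in the provider config
    // (for example Ehcache's timeToLiveSeconds) and accept that much staleness.
    public class CacheEviction {

        private final SessionFactory sessionFactory;   // obtained elsewhere, e.g. from Spring

        public CacheEviction(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        /** Call after an out-of-band change to the underlying tables. */
        public void evictEverything() {
            Cache cache = sessionFactory.getCache();
            cache.evictQueryRegions();        // cached query result sets
            cache.evictEntityRegions();       // cached entity data
            cache.evictCollectionRegions();   // cached collections
        }
    }
    ```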

    Read the article

  • Hibernate + Spring + session + cache

    - by andromeda
    We are using Hibernate with Spring for our Java application. We have found that when one session updates something in the database, other sessions cannot see the update. For example, user1 reads the account balance from the database, then user2 increases the balance; if user1 reads the object again, he sees the account balance from before the update (it seems the session returns the value from its own cache), but we want to get the updated object with the new account balance. User1 uses one session for all his activity, which is different from user2's session. Is there any configuration to force reading the updated object from the database? Or any other help?
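
    The first-level (session) cache keeps returning the same in-memory instance for a given id for as long as that session lives, so a long-lived session never sees another session's committed update unless it is told to re-read. A hedged sketch using the standard Session API; the Account entity and id below are stand-ins:

    ```java
    import org.hibernate.Session;

    public class FreshRead {

        // Illustrative mapped entity; the real mapping is omitted.
        public static class Account { Long id; java.math.BigDecimal balance; }

        public Account reload(Session session, Long accountId) {
            Account account = (Account) session.get(Account.class, accountId);

            // Option 1: re-read the row from the database into the managed instance.
            session.refresh(account);

            // Option 2: drop it from the session cache and load it again.
            session.evict(account);
            return (Account) session.get(Account.class, accountId);
        }
    }
    ```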

    Read the article

  • Multithreaded Java cache for objects that are heavy to create?

    - by krosenvold
    I need to cache some objects with fairly heavy creation times, and I need exactly-once creation semantics. It should be possible to create objects for different CacheKeys concurrently. I think I need something that (under the hood) does something like this: ConcurrentHashMap<CacheKey, Future<HeavyObject>> Are there any existing open-source implementations of this that I can re-use?
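
    This is essentially the "memoizer" idiom from Java Concurrency in Practice; Guava's CacheBuilder/LoadingCache is an existing open-source implementation that gives the same per-key, build-once behaviour. A compact sketch of the hand-rolled version, with Function standing in for the heavy creation logic:

    ```java
    import java.util.concurrent.*;
    import java.util.function.Function;

    // Exactly-once creation per key; different keys are created concurrently.
    public final class Memoizer<K, V> {
        private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();
        private final Function<K, V> factory;

        public Memoizer(Function<K, V> factory) { this.factory = factory; }

        public V get(K key) throws InterruptedException, ExecutionException {
            while (true) {
                Future<V> future = cache.get(key);
                if (future == null) {
                    FutureTask<V> task = new FutureTask<>(() -> factory.apply(key));
                    future = cache.putIfAbsent(key, task);   // only one thread wins the race
                    if (future == null) {
                        future = task;
                        task.run();                          // the winner builds the heavy object
                    }
                }
                try {
                    return future.get();                     // everyone else blocks on the same Future
                } catch (CancellationException e) {
                    cache.remove(key, future);               // retry if that creation was cancelled
                }
            }
        }
    }
    ```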

    Read the article

  • C#: How to implement a smart cache

    - by Svish
    I have some places where implementing some sort of cache might be useful. For example in cases of doing resource lookups based on custom strings, finding names of properties using reflection, or to have only one PropertyChangedEventArgs per property name. A simple example of the last one: public static class Cache { private static Dictionary<string, PropertyChangedEventArgs> cache; static Cache() { cache = new Dictionary<string, PropertyChangedEventArgs>(); } public static PropertyChangedEventArgs GetPropertyChangedEventArgs(string propertyName) { if (cache.ContainsKey(propertyName)) return cache[propertyName]; return cache[propertyName] = new PropertyChangedEventArgs(propertyName); } } But, will this work well? For example if we had a whole load of different propertyNames, that would mean we would end up with a huge cache sitting there never being garbage collected or anything. I'm imagining if what is cached are larger values and if the application is a long-running one, this might end up as kind of a problem... or what do you think? How should a good cache be implemented? Is this one good enough for most purposes? Any examples of some nice cache implementations that are not too hard to understand or way too complex to implement?
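
    On the unbounded-growth worry: one common answer is to cap the cache and evict least-recently-used entries once it is full. The question is C#, but the idea is easy to see in a Java sketch built on LinkedHashMap's access order (the capacity and synchronization choices here are illustrative, not a recommendation for the code above):

    ```java
    import java.util.Collections;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // A size-bounded LRU cache: once maxEntries is exceeded, the least recently
    // used entry is dropped, so the cache can never grow without limit.
    public final class LruCache<K, V> {
        private final Map<K, V> map;

        public LruCache(final int maxEntries) {
            LinkedHashMap<K, V> lru = new LinkedHashMap<K, V>(16, 0.75f, true) {  // accessOrder = true
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > maxEntries;   // evict once we pass the cap
                }
            };
            this.map = Collections.synchronizedMap(lru);   // coarse-grained thread safety
        }

        public V get(K key) { return map.get(key); }
        public void put(K key, V value) { map.put(key, value); }
    }
    ```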

    Read the article

  • ASP.NET - How can I cache user details for the duration of their visit?

    - by rockinthesixstring
    I've built a Repository that gets user details Public Function GetUserByOpenID(ByVal openid As String) As User Implements IUserRepository.GetUserByOpenID Dim user = (From u In dc.Users Where u.OpenID = openid Select u).FirstOrDefault Return user End Function And I'd like to be able to pull those details down IF the user is logged in AND IF the cached data is null. What is the best way to create a User object that contains all of the user's details, and persist it across the entire site for the duration of their visit? I was trying this in my Global.asax, but I'm not really happy using Session variables. I'd rather have a single object with all the details inside. Private Sub BaseGlobal_AcquireRequestState(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.AcquireRequestState If Session("UserName") Is Nothing AndAlso User.Identity.IsAuthenticated Then Dim repo As UrbanNow.Core.IUserRepository = New UrbanNow.Core.UserRepository Dim _user As New UrbanNow.Core.User _user = repo.GetUserByOpenID(User.Identity.Name) Session("UserName") = _user.UserName() Session("UserID") = _user.ID End If End Sub
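
    The shape being asked for (check the per-visit store first, hit the repository only on a miss, keep the result for the rest of the visit) looks like this in a Java servlet; the repository interface and User type below are stand-ins for the question's IUserRepository and User, purely to illustrate the pattern:

    ```java
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class CurrentUserLookup {

        private static final String SESSION_KEY = "currentUser";

        // Stand-ins so the sketch is self-contained.
        public interface UserRepository { User findByOpenId(String openId); }
        public static class User { public String userName; public long id; }

        private final UserRepository repository;

        public CurrentUserLookup(UserRepository repository) { this.repository = repository; }

        public User currentUser(HttpServletRequest request) {
            HttpSession session = request.getSession();
            User user = (User) session.getAttribute(SESSION_KEY);
            if (user == null && request.getUserPrincipal() != null) {
                user = repository.findByOpenId(request.getUserPrincipal().getName());
                session.setAttribute(SESSION_KEY, user);   // cached for the rest of the visit
            }
            return user;
        }
    }
    ```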

    Read the article

  • Memcached getDelayed alternative implementation

    - by iBobo
    I would like to use getDelayed on the PHP Memcached extension but I think it's not implemented in the right way. Right now you ask for some keys and then retrieve all of them with fetch() and fetchAll(). But imagine a scenario where I need to retrieve 15 keys used in different parts of the page which I don't know in advance, but I can ask the various objects to give me the list. What I want is give the Memcached instance this list (each component would give its part) then later when I need them retrieve from the instance, but not all of them at once: each component would take the one it needs. Basically if I were to implement this I would prohibit using getDelayed alone and implement a bookGet($keys) method where you would add the keys to book (which actually calls getDelayed), and redefine get to handle these three cases: key is booked and retrieved - return the value; key is booked but not retrieved - go and force the fetch of the booked keys and return the correct value; key not booked - do a normal lookup. I want to know if this makes sense, your thoughts on the subject and if someone already implemented this or maybe PECL Memcached already works this way and actually the documentation doesn't explain it correctly.
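
    Sketching the wrapper described above in Java makes the three cases explicit; KeyValueClient is a hypothetical stand-in for a memcached client (its getMulti plays the role of getDelayed plus fetchAll), so treat this as an illustration of the bookkeeping rather than working PECL code:

    ```java
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class BookingCache {

        // Hypothetical client interface; only the two calls the sketch needs.
        public interface KeyValueClient {
            Map<String, Object> getMulti(Set<String> keys);   // bulk fetch of booked keys
            Object get(String key);                           // ordinary single lookup
        }

        private final KeyValueClient client;
        private final Set<String> booked = new HashSet<>();
        private final Map<String, Object> fetched = new HashMap<>();

        public BookingCache(KeyValueClient client) { this.client = client; }

        /** Each component books the keys it will need later in the page. */
        public void book(Set<String> keys) { booked.addAll(keys); }

        public Object get(String key) {
            if (fetched.containsKey(key)) {
                return fetched.get(key);                  // booked and already retrieved
            }
            if (booked.contains(key)) {
                fetched.putAll(client.getMulti(booked));  // booked but not retrieved yet:
                booked.clear();                           // force-fetch everything booked
                return fetched.get(key);
            }
            return client.get(key);                       // not booked: normal lookup
        }
    }
    ```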

    Read the article

  • Setting cookie for site in http and https under different subdomains in PHP

    - by nilacqua
    Situation: 1. I'm trying to run an HTTPS store (xcart) under one domain, secure.mydomain, and I want to have access to a cookie it sets from the HTTP site at www.mydomain. 2. I'm running PHP on Apache (MAMP), testing in Firefox with Firecookie. 3. The existing code sets cookies for .secure.mydomain. I'm not sure if this is xcart related, but setcookie is actually called using secure.mydomain; I'm not sure why the "." is prepended. Problem: when I try to use setcookie in HTTPS with the domain .mydomain or just mydomain, no cookie is created, whether I'm running the store under HTTP or HTTPS. The testing code I'm using is: setcookie('three', 'two', 0, "/", ".mydomain"); If I set the cookie domain to secure.mydomain or .secure.mydomain it does show up. Is there a reason the cookie isn't showing up?
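
    The scoping rule being aimed at is that a cookie whose Domain is the registrable parent (for example .mydomain.com rather than one host) is sent to both the www and secure subdomains, while a host-only cookie is not. Only as an illustration of that scoping (not a diagnosis of the PHP snippet), the same cookie set from a Java servlet:

    ```java
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;

    // Names and values mirror the test cookie in the question.
    public class SharedCookie {
        public static void addSharedCookie(HttpServletResponse response) {
            Cookie cookie = new Cookie("three", "two");
            cookie.setDomain(".mydomain.com");  // parent domain: shared across subdomains
            cookie.setPath("/");                // whole site
            cookie.setSecure(false);            // must be false if it should also flow over plain http
            response.addCookie(cookie);
        }
    }
    ```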

    Read the article

  • Facebook iframe app redirecting https to http, how?

    - by Paul Whitrow
    I'm trying to get an app working within Facebook, but no matter what I try, including forcing HTTPS-only in the app settings (see screenshot), the iframe source (Facebook's canvas) seems to end up redirecting the https address to http (301), which then produces SEC7111: HTTPS security errors in IE. (Sorry, I can't post screenshots or extra links yet.) Header dump of the page in question: Request URL: https://[hidden] Request Method: POST Status Code: 301 Moved Permanently Response Headers: Connection: keep-alive Content-Encoding: gzip Content-Length: 253 Content-Type: text/html; charset=iso-8859-1 Date: Mon, 01 Jul 2013 09:42:32 GMT Location: http://[hidden] Server: Apache/2.2.22 Vary: Accept-Encoding I'm getting so confused by this, and would welcome any help that the community could offer.

    Read the article

  • Rails gives wrong headers after upgrade 2.3.5 -> 2.3.8

    - by macsniper
    I just upgraded from Rails 2.3.5 to Rails 2.3.8, but now my redirects are not working properly. I get the following response HTTP headers: HTTP/1.1 302 Moved Temporarily Date: Wed, 02 Jun 2010 09:40:39 GMT Content-Length: 93 Content-Type: text/html whereas previously I got: HTTP/1.1 302 Moved Temporarily Connection: close Date: Wed, 02 Jun 2010 09:41:18 GMT Set-Cookie: _session_id=<correct id>; path=/ Status: 302 Found Location: <correct url> Cache-Control: no-cache Server: Mongrel 1.1.5 Content-Type: text/html; charset=utf-8 Content-Length: 93 Does anyone know how to fix this? Besides the redirect not working, the login cookie is not set either (but I think the two are related somehow). I have already tried overriding redirect_to in order to set response.headers['Location'] etc., but they did not appear in the response.

    Read the article

  • Sending a file from memory (rather than disk) over HTTP using libcurl

    - by cinek1lol
    Hi! I would like to send pictures via a program written in C++. This works: WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL); It works, but I would like to send the picture from memory, from a char variable I have already loaded it into (you know what I mean? First I load the picture into a variable and then send the variable), because right now I have to specify the path of the picture on disk. I want to write this in C++ using the curl library itself, not by calling the .exe. I have also found this program (which I have modified a bit): #include <stdio.h> #include <string.h> #include <iostream> #include <curl/curl.h> #include <curl/types.h> #include <curl/easy.h> int main(int argc, char *argv[]) { CURL *curl; CURLcode res; struct curl_httppost *formpost=NULL; struct curl_httppost *lastptr=NULL; struct curl_slist *headerlist=NULL; static const char buf[] = "Expect:"; curl_global_init(CURL_GLOBAL_ALL); /* Fill in the file upload field */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "send", CURLFORM_FILE, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "nowy.jpg", CURLFORM_COPYCONTENTS, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "submit", CURLFORM_COPYCONTENTS, "send", CURLFORM_END); curl = curl_easy_init(); headerlist = curl_slist_append(headerlist, buf); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php"); if ( (argc == 2) && (!strcmp(argv[1], "xml=yes")) ) curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist); curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost); res = curl_easy_perform(curl); curl_easy_cleanup(curl); curl_formfree(formpost); curl_slist_free_all (headerlist); } system("pause"); return 0; }
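
    libcurl does cover this case: curl_formadd accepts CURLFORM_BUFFER, CURLFORM_BUFFERPTR and CURLFORM_BUFFERLENGTH so a form part can come straight from memory instead of a file path. For comparison, the same idea in Java (a multipart upload built from an in-memory byte array) with java.net.http; the URL and field names mirror the question and are illustrative only:

    ```java
    import java.io.ByteArrayOutputStream;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.UUID;

    // The picture is a byte[] in memory (read from disk here only to have data);
    // a multipart/form-data body is assembled by hand and POSTed.
    public class UploadFromMemory {
        public static void main(String[] args) throws Exception {
            byte[] image = Files.readAllBytes(Path.of("ok.jpg"));
            String boundary = "----boundary" + UUID.randomUUID();

            ByteArrayOutputStream body = new ByteArrayOutputStream();
            body.write(("--" + boundary + "\r\n"
                    + "Content-Disposition: form-data; name=\"fileupload\"; filename=\"ok.jpg\"\r\n"
                    + "Content-Type: image/jpeg\r\n\r\n").getBytes(StandardCharsets.UTF_8));
            body.write(image);
            body.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));

            HttpRequest request = HttpRequest.newBuilder(URI.create("https://www.imageshack.us/index.php"))
                    .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                    .POST(HttpRequest.BodyPublishers.ofByteArray(body.toByteArray()))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }
    ```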

    Read the article

  • Is it possible to cache an asp page on the server side?

    - by SLC
    Let's assume you have a big, complex index page that shows news articles and stuff. It's not going to change very often. Can you cache it somehow on the server side, so requests don't force the server to dynamically regenerate the entire page every time someone visits it? Or does ASP.NET do this automatically? If so, how does it know when something has changed?
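
    ASP.NET WebForms does offer declarative output caching (the @ OutputCache page directive with a Duration), but it only applies when you ask for it; there is no automatic change detection beyond the expiry or cache dependencies you configure. The underlying mechanism is just "keep the rendered page until it expires or is invalidated", which a minimal sketch shows:

    ```java
    import java.time.Duration;
    import java.time.Instant;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Supplier;

    // Cache the rendered HTML per URL with a time-to-live; re-render only after
    // expiry, or after invalidate() when the underlying articles change.
    public class OutputCache {
        private record Entry(String html, Instant expires) {}   // Java 16+ record

        private final ConcurrentMap<String, Entry> pages = new ConcurrentHashMap<>();
        private final Duration ttl;

        public OutputCache(Duration ttl) { this.ttl = ttl; }

        public String render(String url, Supplier<String> renderer) {
            Entry entry = pages.get(url);
            if (entry == null || entry.expires().isBefore(Instant.now())) {
                entry = new Entry(renderer.get(), Instant.now().plus(ttl));   // regenerate
                pages.put(url, entry);
            }
            return entry.html();
        }

        public void invalidate(String url) { pages.remove(url); }
    }
    ```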

    Read the article

  • Review of a locked part of a function in C#.NET

    - by Lieven Cardoen
    Is this piece of code, where I lock part of the function, correct? Or can it have drawbacks when multiple sessions ask concurrently for the same Exam? The intent is that the first client that asks for the Exam will assemble it, and all subsequent clients will get the cached version. public Exam GetExamByExamDto(ExamDTO examDto, int languageId) { Log.Warn("GetExamByExamDto"); lock (LockString) { if (!ContainsExam(examDto.id, languageId)) { Log.Warn("Assembling ExamDto"); var examAssembler = new ExamAssembler(); var exam = examAssembler.createExam(examDto); if (AddToCache(exam)) { _examDictionary.Add(examDto.id + "_" + languageId, exam); } Log.Warn("Returning non cached ExamDto"); return exam; } } Log.Warn("Returning cached ExamDto"); return _examDictionary[examDto.id + "_" + languageId]; } I have a feeling that this isn't the way to do it.
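
    Two things usually get flagged in a review of this shape: locking on a string is risky in .NET because interned strings can be shared with unrelated code, and one global lock serialises the assembly of exams that have nothing to do with each other. In Java, the per-key "first caller builds, later callers wait and then read the cached copy" behaviour comes from ConcurrentHashMap.computeIfAbsent; the sketch below uses stand-in types mirroring the question:

    ```java
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ExamCache {
        // Minimal stand-ins so the sketch compiles on its own.
        public static class Exam {}
        public static class ExamDTO { public int id; }
        public static class ExamAssembler { public Exam createExam(ExamDTO dto) { return new Exam(); } }

        private final ConcurrentMap<String, Exam> exams = new ConcurrentHashMap<>();

        public Exam getExam(ExamDTO dto, int languageId) {
            // The mapping function runs at most once per key; concurrent callers
            // for the same key wait for it, while other keys proceed in parallel.
            return exams.computeIfAbsent(dto.id + "_" + languageId,
                    key -> new ExamAssembler().createExam(dto));
        }
    }
    ```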

    Read the article

  • How do I cache jQuery selections?

    - by David
    I need to cache about 100 different selections for animating. The following is sample code. Is there a syntax problem in the second sample? If this isn't the way to cache selections, it's certainly the most popular on the interwebs. So, what am I missing? note: p in the $.path.bezier(p) below is a correctly declared object passed to jQuery.path.bezier (awesome animation library, by the way) This works $(document).ready(function() { animate1(); animate2(); }) function animate1() { $('#image1').animate({ path: new $.path.bezier(p) }, 3000); setTimeout("animate1()", 3000); } function animate2() { $('#image2').animate({ path: new $.path.bezier(p) }, 3000); setTimeout("animate2()", 3000); } This doesn't work var $one = $('#image1'); //problem with syntax here?? var $two = $('#image2'); $(document).ready(function() { animate1(); animate2(); }) function animate1() { $one.animate({ path: new $.path.bezier(p) }, 3000); setTimeout("animate1()", 3000); } function animate2() { $two.animate({ path: new $.path.bezier(p) }, 3000); setTimeout("animate2()", 3000); }

    Read the article

  • Best method to cache objects in PHP?

    - by Martin Bean
    Hi, I'm currently developing a large site that handles user registrations, a social networking website for argument's sake. However, I've noticed a lag in page loads and determined that it is the creation of objects on pages that's slowing things down. For example, I have a Member object that, when instantiated with an ID passed as a constructor parameter, queries the database for that member's row in the members database table. Not bad, but this object is created on each page load, and more than once when, say, building an array of that particular member's friends, as a new Member object is created for each friend. So on a single page I can have upwards of seven of the same object, just containing different properties. What I want to do is find a way to reduce the database load and to allow objects to persist between page loads. For example, the logged-in user's object would be created on login (which I can do) but then stored somewhere for retrieval so I don't have to keep re-creating it between page loads. What is the best solution for this? I've had a look at Memcache, but as it's a third-party module I can't have the web host install it on this occasion. What are my alternatives and/or best practices in my case? Thanks in advance.

    Read the article

  • server not sending custom header values

    - by egza
    I'm using PHP 5.2.17 to fetch a remote page. The HTTP request contains some cookie values, but the cookies are not delivered to the destination page. $url = 'http://somesite.com/'; $opts = array( 'http' => array ( 'header' => array("Cookie: field1=value1; field2=value2\r\n") ) ); $context = stream_context_create($opts); echo file_get_contents($url, false, $context); Can you help me find the problem? Note: I can't use curl. Thanks.
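
    Not a diagnosis of the snippet, but for comparison this is the whole request in Java: the cookies travel as a single Cookie request header whose value is name=value pairs joined with "; ", and no trailing CRLF is needed because the client adds the line ending itself:

    ```java
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class CookieRequest {
        public static void main(String[] args) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL("http://somesite.com/").openConnection();
            conn.setRequestProperty("Cookie", "field1=value1; field2=value2");  // one Cookie header
            try (InputStream in = conn.getInputStream()) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
    }
    ```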

    Read the article

  • nm-applet gone?

    - by welp
    nm-applet seems to have disappeared from my system. I am running 12.10. Here's what I get when I check my package manager logs: ? ~ grep network-manager /var/log/dpkg.log 2012-10-06 10:37:08 upgrade network-manager-gnome:amd64 0.9.6.2-0ubuntu5 0.9.6.2-0ubuntu6 2012-10-06 10:37:08 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:09 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:09 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:37:09 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 configure network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 remove network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status config-files network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status config-files network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 install network-manager-gnome:amd64 0.9.6.2-0ubuntu6 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed 
network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:06 configure network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:06 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:07 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:07 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:07 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 ? ~ Unfortunately, I cannot find network-manager-applet package at all: ? ~ apt-cache search network-manager-applet ? ~ Here are the contents of /etc/apt/sources.list: ? ~ cat /etc/apt/sources.list # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://gb.archive.ubuntu.com/ubuntu/ quantal main restricted deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://gb.archive.ubuntu.com/ubuntu/ quantal universe deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal universe deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates universe deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://gb.archive.ubuntu.com/ubuntu/ quantal multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal multiverse deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. 
deb http://gb.archive.ubuntu.com/ubuntu/ quantal-backports main restricted universe multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu quantal-security main restricted deb-src http://security.ubuntu.com/ubuntu quantal-security main restricted deb http://security.ubuntu.com/ubuntu quantal-security universe deb-src http://security.ubuntu.com/ubuntu quantal-security universe deb http://security.ubuntu.com/ubuntu quantal-security multiverse deb-src http://security.ubuntu.com/ubuntu quantal-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu quantal main deb-src http://extras.ubuntu.com/ubuntu quantal main ? ~ Right now, I can't think of anything else. Happy to provide more info upon request.

    Read the article

  • WIF-less claim extraction from ACS: SWT

    - by Elton Stoneman
    WIF with SAML is solid and flexible, but unless you need the power, it can be overkill for simple claim assertion, and in the REST world WIF doesn’t have support for the latest token formats.  Simple Web Token (SWT) may not be around forever, but while it's here it's a nice easy format which you can manipulate in .NET without having to go down the WIF route. Assuming you have set up a Relying Party in ACS, specifying SWT as the token format: When ACS redirects to your login page, it will POST the SWT in the first form variable. It comes through in the BinarySecurityToken element of a RequestSecurityTokenResponse XML payload , the SWT type is specified with a TokenType of http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0 : <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:31:18.655Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:11:18.655Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="uuid:fc8d3332-d501-4bb0-84ba-d31aa95a1a6c" ValueType="http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> [ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> Reading the SWT is as simple as base-64 decoding, then URL-decoding the element value:     var wrappedToken = XDocument.Parse(HttpContext.Current.Request.Form[1]);     var binaryToken = wrappedToken.Root.Descendants("{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}BinarySecurityToken").First();     var tokenBytes = Convert.FromBase64String(binaryToken.Value);     var token = Encoding.UTF8.GetString(tokenBytes);     var tokenType = wrappedToken.Root.Descendants("{http://schemas.xmlsoap.org/ws/2005/02/trust}TokenType").First().Value; The decoded token contains the claims as key/value pairs, along with the issuer, audience (ACS realm), expiry date and an HMAC hash, which are in query string format. 
Separate them on the ampersand, and you can write out the claim values in your logged-in page:     var decoded = HttpUtility.UrlDecode(token);     foreach (var part in decoded.Split('&'))     {         Response.Write("<pre>" + part + "</pre><br/>");     } - which will produce something like this: http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant=2012-08-31T06:57:01.855Z http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname=XYZ http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected] http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected] http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider=http://fs.svc.xyz.com/adfs/services/trust Audience=http://localhost/x.y.z ExpiresOn=1346402225 Issuer=https://x-y-z.accesscontrol.windows.net/ HMACSHA256=oDCeEDDAWEC8x+yBnTaCLnzp4L6jI0Z/xNK95PdZTts= The HMAC hash lets you validate the token to ensure it hasn’t been tampered with. You'll need the token signing key from ACS, then you can re-sign the token and compare hashes. There's a full implementation of an SWT parser and validator here: How To Request SWT Token From ACS And How To Validate It At The REST WCF Service Hosted In Windows Azure, and a cut-down claim inspector on my github code gallery: ACS Claim Inspector. Interestingly, ACS lets you have a value for your logged-in page which has no relation to the realm for authentication, so you can put this code into a generic claim inspector page, and set that to be your logged-in page for any relying party where you want to check what's being sent through. Particularly handy with ADFS, when you're modifying the claims provided, and want to quickly see the results.
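
    A hedged Java sketch of the same steps (not the post's C#, and the encoding assumptions are noted in the comments): base64-decode the BinarySecurityToken, split the form-encoded pairs to read the claims, and re-compute HMACSHA256 over everything before "&HMACSHA256=" with the ACS token signing key to validate:

    ```java
    import java.net.URLDecoder;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class SwtCheck {

        /** Print the claim key/value pairs, as the post does with Response.Write. */
        public static void printClaims(String base64Token) {
            String token = new String(Base64.getDecoder().decode(base64Token), StandardCharsets.UTF_8);
            for (String part : token.split("&")) {
                System.out.println(URLDecoder.decode(part, StandardCharsets.UTF_8));  // Java 10+ overload
            }
        }

        /** Re-sign the token and compare hashes, assuming ACS form-URL-encodes the signature value. */
        public static boolean isValid(String base64Token, byte[] signingKey) throws Exception {
            String token = new String(Base64.getDecoder().decode(base64Token), StandardCharsets.UTF_8);

            int sigStart = token.lastIndexOf("&HMACSHA256=");
            String signedPart = token.substring(0, sigStart);
            String sentSignature = URLDecoder.decode(
                    token.substring(sigStart + "&HMACSHA256=".length()), StandardCharsets.UTF_8);

            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(signingKey, "HmacSHA256"));
            String computed = Base64.getEncoder()
                    .encodeToString(mac.doFinal(signedPart.getBytes(StandardCharsets.UTF_8)));
            return computed.equals(sentSignature);
        }
    }
    ```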

    Read the article
