Search Results

Search found 21091 results on 844 pages for 'dsl client'.

Page 452/844 | < Previous Page | 448 449 450 451 452 453 454 455 456 457 458 459  | Next Page >

  • Cloudcel: Excel Meets the Cloud

    - by kaleidoscope
    Cloudscale is launching Cloudcel. Cloudcel is the first product that demonstrates the full power of integrated "Client-plus-Cloud" computing. You use desktop Excel in the normal way, but can also now seamlessly tap into the scalability and massive parallelism of the cloud, entirely from within Excel, to handle your Big Data. Building an app in Cloudcel is really easy - no databases, no programming. Simply drop building blocks onto the spreadsheet (in any order, in any location) and launch the app to the cloud with a single click. Parallelism, scalability and fault tolerance are automatic. With Cloudcel, you can process real-time data streams continuously, and get alerts pushed to you as soon as important events or patterns are detected ("set it and forget it"). Cloudcel is offered as a pay-per-use cloud service - so no hardware, no software licenses, and no IT department required to set it up. Private cloud deployments are also available. Please find the links below for more detail: http://billmccoll.sys-con.com/node/1326645 http://cloudcel.com/

    Read the article

  • Trying to Integrate Five9 and Apptivio [on hold]

    - by David Mitchell
    Five9 is a calling system application and Apptivio will be used to store client information for purchased products. Specifically, what I need is example code that will allow me to access Five9's CRM system using the access key and transfer a person's first and last name, for example, to Apptivio. The issue is that I have never dealt with this type of system, and I cannot find any information for it other than the Web2Campaign document that was sent to me by Five9. Let's say this is the code from Five9: F9key=first_name&F9key=last_name&first_name=jon&last_name=smith Once this information is placed into Five9, I must update Apptivio with it. I am lost as to how to send this information to Apptivio.
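
    The Web2Campaign string above is an ordinary URL-encoded query string, so the bridge boils down to parsing it and re-posting the fields to the other system. Below is a minimal sketch in C#, where the endpoint URL, field names, and access-key parameter are placeholders - not Apptivio's real API, which you would need to confirm in their documentation:

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Web;  // reference System.Web for HttpUtility

        class Five9ToApptivio
        {
            static void Main()
            {
                // The Web2Campaign-style query string quoted above.
                var raw = "F9key=first_name&F9key=last_name&first_name=jon&last_name=smith";
                var fields = HttpUtility.ParseQueryString(raw);

                using (var http = new HttpClient())
                {
                    // Hypothetical target endpoint and field names.
                    var payload = new FormUrlEncodedContent(new Dictionary<string, string>
                    {
                        ["firstName"] = fields["first_name"],  // "jon"
                        ["lastName"]  = fields["last_name"],   // "smith"
                        ["apiKey"]    = "YOUR_ACCESS_KEY"      // placeholder
                    });

                    // Blocking on the task for brevity in this sketch.
                    var response = http.PostAsync("https://api.example.com/contacts", payload).Result;
                    Console.WriteLine(response.StatusCode);
                }
            }
        }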

    Read the article

  • Web Page Execution Internals

    - by octopusgrabbus
    My question is: what is the subject area that covers web page execution/loading? I am looking to purchase a book, by subject area, that covers when things execute or load in a web page, whether it's straight HTML, HTML and JavaScript, or a PHP page. Is that topic covered by a detailed HTML book, or should I expect to find information like that in a JavaScript or PHP book? I understand that PHP and Perl execute on the server and that JavaScript is client-side, and I know there is a lot of online documentation describing <html>, <head>, <body>, and so on. I'm just wondering what subject area a book would be in to cover all that - not a discussion of the best book or someone's favorite book, but the subject area.

    Read the article

  • Looking for bug tracking software

    - by Shelton
    I'm looking for a bug/issue tracking system that can: Integrate with lots of other services (Basecamp, Beanstalk, etc.). Integrate with popular CMSs, such as WordPress, so the client can enter a ticket from a system that is familiar to them and not have one more login to worry about. Generate reports for my own purposes. Bonus if there's an iPhone app. Doesn't require additional development on my end (I have plenty of money and no time). I've already looked into Lighthouse and ZenDesk - both are solid offerings - but I don't see what I need out of the box; I'd have to build CMS plug-ins. And I've looked through WP plug-ins for bug tracking software, but nothing I've found integrates with these products. Does anyone know of something that meets these requirements without additional development, or am I stuck putting my business on hold to get this piece in place myself?

    Read the article

  • Meta Refresh for change of page name and content

    - by user3507399
    Hopefully just a quick one. I've got a client that is changing the name of a workshop that they run. This means a change of URL and page title for keywords that they have first-page rankings on. The keywords are still relevant, so what I want to avoid is a 301 redirect to a page that has different keywords to the previous page. Is the best option to keep the old page live, with its URL and title, and use a meta refresh to redirect after a period of time (not instant)? That way the SEO ranking is retained for the previous workshop name while they work on the ranking for the new name. Or would a 301 redirect have an adverse effect? Thanks!
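
    For reference, a delayed meta refresh is a single tag in the old page's <head>; the 10-second delay and URL below are placeholders:

        <meta http-equiv="refresh" content="10; url=http://www.example.com/new-workshop-name/">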

    Read the article

  • Google: new Gmail apps for Android and iOS, and a new YouTube for Apple's iDevices

    Gmail: new apps for Android and iOS, and a new YouTube for Apple's iDevices. Fragmentation also affects Google's applications for its own mobile OS. Its new Gmail for Android - which has just been released - is only available for versions 4.0 and later (currently 4.1) of the system. Among the improvements to the mobile mail client are, in no particular order: a new preview of photos in emails, the ability to attach a video or a photo directly to a message, automatic adjustment of the font size to the screen, and swiping messages to the right to sort or delete them. ...

    Read the article

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website and can showcase my work to new clients. What is the best way to avoid having my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel=nofollow on all the links as well?
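
    For reference, a robots.txt that blocks all well-behaved crawlers from the entire showcase subdomain is just two lines, served from the subdomain's root:

        User-agent: *
        Disallow: /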

    Read the article

  • Do search engines rank internal redirects negatively?

    - by siverd
    A client is in the late stages (code complete) of a website redesign and unfortunately hasn't implemented 301 redirects to point high-traffic pages to the new URLs. As I understand it, our only option at this point is to create redirects within the CMS. Our CMS allows us to do this: www.mysite.com/category/current-page.html will redirect to www.mysite.com/new-category-name/new-page.html. The site now uses custom logic on our 404 page to check this list of redirects and, if one exists, forwards the user to new-page.html. I understand that using 301 redirects would be the correct way to maintain our page rank, but I think that would require a code change, which isn't possible. Question: How will search engines respond to this? Will they wait until the redirect happens and allow us to keep our page rank (authority, trust, etc.), or will they see the 404 page and down-rank us? Worst case... will they make our new-page.html start from a rank of "0"? Thanks for your help.
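
    If the custom 404 logic can set response headers at all, it can emit a true 301 instead of serving a 404 and then forwarding. A minimal ASP.NET sketch, assuming .NET 4 Web Forms and a hypothetical LookupRedirect helper standing in for the CMS redirect list:

        using System;
        using System.Web;

        // Hypothetical code-behind for the CMS's custom 404 page.
        public partial class NotFoundPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Consult the CMS redirect list described above.
                string newUrl = LookupRedirect(Request.RawUrl);
                if (newUrl != null)
                {
                    // Emits "301 Moved Permanently" plus a Location header,
                    // so search engines transfer ranking signals to the new URL.
                    Response.RedirectPermanent(newUrl);
                }
            }

            private string LookupRedirect(string oldUrl)
            {
                // Placeholder: e.g. "/category/current-page.html" ->
                //              "/new-category-name/new-page.html"
                return null;
            }
        }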

    Read the article

  • Migrating R Scripts from Development to Production

    - by Mark Hornick
    "How do I move my R scripts stored in one database instance to another? I have my development/test system and want to migrate to production."

    Users of Oracle R Enterprise Embedded R Execution will often store their R scripts in the R Script Repository in Oracle Database, especially when using the ORE SQL API. From previous blog posts, you may recall that Embedded R Execution enables running R scripts managed by Oracle Database using both R and SQL interfaces. In ORE 1.3.1, the SQL API requires scripts to be stored in the database and referenced by name in SQL queries. The SQL API enables seamless integration with database-based applications and ease of production deployment.

    Loading R scripts in the repository

    Before talking about migration, we'll first introduce how users store R scripts in Oracle Database. Users can add R scripts to the repository in R using the function ore.scriptCreate, or in SQL using the function sys.rqScriptCreate. For the sample R script

        id <- 1:10
        plot(1:100, rnorm(100), pch = 21, bg = "red", cex = 2)
        data.frame(id = id, val = id / 100)

    users wrap this in a function and store it in the R Script Repository with a name. In R, this looks like

        ore.scriptCreate("RandomRedDots", function() {
          id <- 1:10
          plot(1:100, rnorm(100), pch = 21, bg = "red", cex = 2)
          data.frame(id = id, val = id / 100)
        })

    In SQL, this looks like

        begin
          sys.rqScriptCreate('RandomRedDots',
            'function() {
               id <- 1:10
               plot(1:100, rnorm(100), pch = 21, bg = "red", cex = 2)
               data.frame(id = id, val = id / 100)
             }');
        end;
        /

    The R function ore.scriptDrop and the SQL function sys.rqScriptDrop can be used to drop these scripts as well. Note that the system will give an error if the script name already exists.

    Accessing R scripts once they've been loaded

    If you're not using a source code control system, it is possible for your R scripts to be misplaced or files modified, making what is stored in Oracle Database the only, or best, copy of your R code. If you've loaded your R scripts to the database, it is straightforward to access them from the database table SYS.RQ_SCRIPTS. For example:

        select * from sys.rq_scripts where name = 'myScriptName';

    From R, scripts in the repository can be loaded into the R client engine using a function similar to the following:

        ore.scriptLoad <- function(name) {
          # Fetch the script source by name, then evaluate it into the
          # caller's environment under the same name.
          query <- paste("select script from sys.rq_scripts where name='", name, "'", sep = "")
          str.f <- OREbase:::.ore.dbGetQuery(query)
          assign(name, eval(parse(text = str.f)), pos = 1)
        }

        ore.scriptLoad("myFunctionName")

    This function is also useful if you want to load an existing R script from the repository into another R script in the repository - think modular coding style. Just include this function in the body of the other function and load the named script.

    Migrating R scripts from one database instance to another

    To move a set of functions from one system to another, the following script loads the functions from one R script repository into the client R engine, then connects to the target database and creates the scripts there with the same names:

        # List user scripts, skipping the system-supplied RQ$/RQG$ entries.
        scriptNames <- OREbase:::.ore.dbGetQuery(
          "select name from sys.rq_scripts where name not like 'RQG$%' and name not like 'RQ$%'")$NAME

        # Pull each script into the local R engine.
        for (s in scriptNames) {
          cat(s, "\n")
          ore.scriptLoad(s)
        }

        # Reconnect to the target database and recreate the scripts there.
        ore.disconnect()
        ore.connect("rquser", "orcl", "localhost", "rquser")

        for (s in scriptNames) {
          cat(s, "\n")
          ore.scriptDrop(s)
          ore.scriptCreate(s, get(s))
        }

    Best Practice

    When naming R scripts, keep in mind that the name can be up to 128 characters. As such, consider organizing scripts in a directory-structure manner. For example, if an organization has multiple groups or applications sharing the same database and there are multiple components, use "/" to facilitate the function organization:

        ore.scriptCreate("/org1/app1/component1/myFunction1", myFunction1)
        ore.scriptCreate("/org1/app1/component1/myFunction2", myFunction2)
        ore.scriptCreate("/org1/app2/component2/myFunction2", myFunction2)
        ore.scriptCreate("/org2/app2/component1/myFunction3", myFunction3)
        ore.scriptCreate("/org3/app2/component1/myFunction4", myFunction4)

    Users can then query for all functions using the path prefix when looking up functions.

    Read the article

  • Looking for the better way to combine deep architecture refactoring with feature based development

    - by voroninp
    Problem statement. Given: TFS as source control; a heavy desktop client application with tons of legacy code and bad or almost absent architectural design; clients constantly requiring new features with sound quality and fast delivery, and constantly complaining about the user-unfriendly UI. Problem: the application undoubtedly requires deep refactoring. This process inevitably makes the application unstable, and a dedicated stabilization phase is needed. We've tried: refactoring in master with periodic merges from master (MB) to feature branches (FB). (My mistake.) Result: many unstable branches. What we are advised: create an additional branch for refactoring (RB), periodically synchronizing it with MB via merges from MB to RB. After RB is stabilized, we substitute master with RB and create a new branch for further refactoring. This is the plan, but here I expect the real hell of merging MB to RB after merging any FB to MB. The main advantage: a stable master most of the time. Are there any better alternatives to this process?
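
    For what it's worth, the periodic MB-to-RB sync in the advised plan maps to a routine TFS command-line merge; a sketch, assuming hypothetical server paths $/App/Main and $/App/Refactor and a local workspace mapped to the target branch:

        tf merge $/App/Main $/App/Refactor /recursive
        tf checkin /comment:"Sync Main -> Refactor"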

    Read the article

  • Speed up ssh login using public key down to 0.1sec

    - by BarsMonster
    I am using PuTTY to log in to my local server, but it takes about 1.5 seconds to log in (from the click on 'Connect' to a working command prompt; most of the time is spent on "Authenticating with public key..."). I know many see even slower speeds, but I would like a login time of no more than 0.1 seconds. I have already set UseDNS=no, allowed only IPv4 in the PuTTY client, and reduced the key length from 4k down to 1k. Any other suggestions to speed it up further?
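
    For reference, the server-side tweak mentioned above lives in /etc/ssh/sshd_config (restart the SSH service after editing). Disabling GSSAPI authentication is another commonly cited login-delay fix, though whether it helps depends on how your sshd was built:

        # /etc/ssh/sshd_config
        UseDNS no
        GSSAPIAuthentication no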

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence better applications.

    So the first thing to define, of course, is the word "better" in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion, then, I stipulate that cloud computing can yield higher-quality applications in terms of scalability, everything else being equal.

    Before going further I also need to outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices.

    For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing on the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables.

    Now, back to cloud computing... Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it's not automatic. You can design a tightly coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud, because it isn't using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale for the purpose of this discussion].

    The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn't tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different than a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better...

    ... the cloud forces better design practices. My 2 cents.

    Read the article

  • MVC 2 in 2 Minutes!

    - by Steve Michelotti
    In a couple of recent Code Camps, I've given my presentation: Top 10 Ways MVC 2 Will Boost Your Productivity. In the presentation, I cover all major new features introduced in MVC 2 with a focus on productivity enhancements. To drive the point home, I conclude with a final demo where I build a couple of screens from scratch, highlighting many (but not all) of the features previously covered in the talk. A couple of weeks ago, I was asked to make it available online, so here it is. In 2 minutes, the demo builds a couple of screens from scratch that provide a goal-setting tracker for a user. MVC 2 features included in the video are: Template Helpers / Editor Templates, Server-side/Client-side Validation, Model Metadata for View Model, HTML Encoding Syntax, Dependency Injection, Abstract Controllers, Custom T4 Templates, and Custom MVC Visual Studio 2010 Code Snippets. The complete code samples and slide deck can be downloaded here: Top 10 Ways MVC 2 Will Boost Your Productivity. Enjoy!

    Read the article

  • Syncing properties across a game server

    - by Vaughan Hilts
    I'm beginning to implement a simple scripting system into my networked server, and I've hit a snag. Until now, I've been wrapping my calls into functions on objects that manipulate those objects, but lately I've been finding this to be a pain for simple things. For example, if I set 'player.HP = 1', this works server-side, but the client never sees the change unless I explicitly send a packet to inform it. For more complicated changes that require several steps, like map swapping (change X, Y, and Map, then do this...), I have a function, and that's fine. But what about these small properties I want to sync?
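
    One common pattern (a sketch in C#, with all names - SyncedProperty, the onChanged callback - invented for illustration) is to route small fields through a property wrapper whose setter notifies the network layer, so every assignment automatically queues a sync message:

        using System;
        using System.Collections.Generic;

        public class SyncedProperty<T>
        {
            private T value;
            private readonly string name;
            private readonly Action<string, T> onChanged;

            public SyncedProperty(string name, T initial, Action<string, T> onChanged)
            {
                this.name = name;
                this.value = initial;
                this.onChanged = onChanged;
            }

            public T Value
            {
                get { return this.value; }
                set
                {
                    // Only notify when the value actually changes.
                    if (!EqualityComparer<T>.Default.Equals(this.value, value))
                    {
                        this.value = value;
                        onChanged(name, value);  // e.g. enqueue a "property changed" packet
                    }
                }
            }
        }

    Usage would look like: var hp = new SyncedProperty<int>("HP", 100, (n, v) => SendToClient(playerId, n, v)); then hp.Value = 1; both updates the server state and pushes the change to the client.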

    Read the article

  • How can IIS 7.5 have the error pages for a site reset to the default configuration?

    - by Sn3akyP3t3
    A mishap occurred while editing web.config to accommodate a nested sub-site. I made use of "<location path="." inheritInChildApplications="false">". Essentially it was a workaround for nested web.config files that were causing a conflict. The result was that error pages were not being handled properly: error 500 was being passed to the client for every type of error encountered. Removing the offending inheritInChildApplications tag from the root web.config restored normal operation for most of the error handling, but for some reason, while error 503 now gets a correct response header, the IIS server is performing the custom actions for error 403.4, which is a redirect to HTTPS. I'm looking to restore the defaults for the error pages so that the default behavior is restored; I can then re-add my error page customizations.

    Read the article

  • Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    This document contains step-by-step instructions for installing and testing the Microsoft Business Intelligence infrastructure based on SQL Server 2012 and SharePoint 2010, focused on SQL Server 2012 Reporting Services with Power View. This document describes how to completely install the following scenarios: a standalone instance of SharePoint and Power View with all required components; a new SharePoint farm with the Power View infrastructure; a server with the Power View infrastructure joined to an existing SharePoint farm; installation of client tools on a separate computer; installation of a tabular instance of Analysis Services on a separate server; and configuration of single sign-on access for double-hop scenarios with and without Kerberos. Scripts are provided for most scenarios.

    Read the article

  • Ping one remote server from another remote server

    - by user666254
    It's simple to ping a server in C#, but suppose I have servers A, B and C. A connects to B. A asks B to ping C, to check that B can talk to C. A needs to read the outcome. Now, first of all is this possible without installing an application onto B? In other words, can I perform the entire check from just running a program on A? If so, can anyone suggest the route I would take to achieve this? I've looked at sockets but from the examples I've seen these require a client AND server application to function.
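
    One agentless option - a sketch, assuming B is a Windows server reachable over WMI and the account running on A has administrative rights on B - is to query B's Win32_PingStatus WMI class remotely, which makes B itself issue the ping to C and return the outcome to A:

        using System;
        using System.Management;  // add a reference to System.Management.dll

        class RemotePing
        {
            static void Main()
            {
                // Connect to server B's WMI namespace (explicit credentials
                // can be supplied via ConnectionOptions if needed).
                var scope = new ManagementScope(@"\\ServerB\root\cimv2");
                scope.Connect();

                // Ask B to ping C; the ping runs on B, not on A.
                var query = new ObjectQuery(
                    "SELECT StatusCode FROM Win32_PingStatus WHERE Address = 'ServerC'");
                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject result in searcher.Get())
                    {
                        // StatusCode 0 means success; null or non-zero means failure.
                        Console.WriteLine("StatusCode: {0}", result["StatusCode"]);
                    }
                }
            }
        }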

    Read the article

  • Configuring SQL Server Express Edition for remote access

    - by rohancragg
    Originally posted on: http://geekswithblogs.net/rohancragg/archive/2013/07/24/configuring-sql-server-express-edition-for-remote-access.aspx I wanted to access SQL Express on my local machine from within a client Hyper-V virtual machine on the same domain. This article got me most of the way there: http://akawn.com/blog/2012/01/configuring-sql-server-2008-r2-express-edition-for-remote-access/ but it was a bit out of date. My steps were: enable the TCP/IP protocol in SNAC; restart SQL Server; configure the (Windows 8) firewall to allow all inbound traffic for sqlservr.exe. Footnote: I thought this might be relevant (nice to be able to script it): http://support.microsoft.com/kb/968872/en-us but the problem is that this is for fixed ports and not compatible with the (default) dynamic ports setting above.
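
    The firewall step can also be scripted; a sketch using netsh, where the sqlservr.exe path is an assumption that must be adjusted for your instance name and SQL Server version:

        netsh advfirewall firewall add rule name="SQL Server Express" dir=in action=allow program="C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Binn\sqlservr.exe" enable=yes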

    Read the article

  • Multicast in Ubuntu

    - by iwant2learn
    Can someone please explain the steps required to configure multicast in Ubuntu? I have a simple program taken from the Internet. When I execute the client program, I get the error:

        Opening datagram socket....OK.
        Setting SO_REUSEADDR...OK.
        Binding datagram socket...OK.
        Adding multicast group error: No such device

    When I execute the server program, I get the error:

        Opening the datagram socket...OK.
        Setting local interface error: Cannot assign requested address

    I am using Ubuntu to run the program on two different laptops connected via the same wireless network.
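
    "No such device" when joining a multicast group usually means the kernel has no multicast route on the interface, and a common fix is to add one; a sketch, assuming the wireless interface is named wlan0 (check with ifconfig). "Cannot assign requested address" on the server side typically means the local interface address passed to the socket option is not actually configured on that machine:

        sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev wlan0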

    Read the article

  • What source control to use for my private gaming server?

    - by crosenblum
    It has SQL Server components, a client launcher, and server software. I want to use an online resource where people can make updates, making it easier to roll out changes to players. Most of the files are just text files or gtx image files. I don't think this qualifies as open source, so I don't know what to do. I tried GitHub, and have a free account there, but it was really clunky mass-adding every file to be committed. I really don't like Subversion, but if that's the best option, I'll use it. The other people who will need access to the files will have no familiarity with any kind of source control, so I need an easy system for them to download files, make changes, and commit to the repository. Any suggestions?

    Read the article

  • What is the best practice for website design and markup now that mobile browsers are common?

    - by Jonathan Drain
    Back in 2008, smartphones were a small market and it was commonplace for sites to be designed for a fixed width - say, 900px or 960px - with the page centered if the browser window was larger. Many designers said fluid width was better, but since user screens typically varied between 1024x768 and 1920x1080, fluid width allowed longer line lengths than are optimal for ease of reading, and so many sites (including Stack Exchange) use fixed width. Now that mobile devices are common, what is the best approach to supporting both desktop and mobile browsers? Establish a separate mobile site (e.g. mobile.example.com)? Serve different CSS to mobile devices - and if so, how: server-side browser sniffing, or a @media rule? Use JavaScript or something similar to adapt the website dynamically to the client? Should all websites be expected to be responsive, with some kind of fluid layout? Something else?
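
    For reference, the @media approach mentioned above needs no server-side detection at all; a minimal sketch, where the breakpoint and selector are placeholders:

        /* Collapse the fixed-width column on small screens. */
        @media (max-width: 480px) {
            #content {
                width: auto;
                float: none;
            }
        }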

    Read the article

  • How to support tableless columns with WYSIWYG editor?

    - by Andy
    On the front page of a site I'm working on there's a small slideshow. It's not for pictures in particular, any content can go in, and I'm currently setting up the editing interface for the client. I'd like to be able to have one/two/more columns in the editable area, and ideally that would be via CSS - does anyone know of a WYSIWYG editor that supports this? I'm using Drupal (would prefer not to involve Panels as it would require a bit of work to make it a streamlined workflow for content entry) in case that matters to anyone. To start the ball rolling, one way would be to use templates. I know CKEditor supports templates, and it looks like TinyMCE might have something similar. I don't know how well these work with tableless columns (the CKEditor homepage demo uses tables to achieve its two column effect). Holding out for a cool solution!

    Read the article

  • jQuery/AJAX on old Computers/Browsers

    - by Andresch Serj
    I am working on a platform that will have a lot of users in the so-called "developing countries", so many of them will be using old computers and old browsers in tiny internet cafes. We want to make sure to give them a good user experience and make sure the website loads as fast as possible. The problem is that while you can save a lot of requests and time using jQuery/AJAX, it also brings along problems: will the computers be powerful enough to deal with the client-side scripts, and will the old browsers handle jQuery? Does anyone have experience with this sort of problem, or know of an article on the topic?

    Read the article

  • Options for Application Registry

    - by Matt Felzani
    I work for a small software company (about 200 people building 8-10 applications) and I was hoping to get some advice on products that might be out there to track which clients are using which versions of our products. The most fundamental relationship would be that a "product" has "versions" and a given "version" is used by a "client". Uses would be: determine which clients use which products; determine which clients are on which versions of a product; determine which clients are exposed to which vulnerabilities because of the version they use; determine which clients cannot move to a new version because of a vulnerability in the new version that they may hit; determine which clients should be approached for an upgrade. Any thoughts or product reviews would be greatly appreciated! Thanks in advance.

    Read the article

  • Access Control Service V2 and Facebook Integration

    - by Your DisplayName here!
    I haven't been blogging about ACS2 in the past because it was not released and I was kinda busy with other stuff. Needless to say I spent quite some time with ACS2 already (both in customer situations as well as in the classroom and at conferences). ACS2 rocks! It's IMHO the most interesting and useful (and most unique) part of the whole Azure offering! For my talk at VSLive yesterday, I played a little with the Facebook integration. See Steve's post on the general setup. One claim that you get back from Facebook is an access token. This token can be used to directly talk to Facebook and query additional properties about the user. Which properties you have access to depends on which authorization your Facebook app requests. You can specify this in the identity provider registration page for Facebook in ACS2. In my example I added access to the home town property of the user. Once you have the access token from ACS you can use e.g. the Facebook SDK from Codeplex (also available via NuGet) to talk to the Facebook API. In my sample I used the WIF ClaimsAuthenticationManager to add the additional home town claim. This is not necessarily how you would do it in a "real" app. Depends ;) The code looks like this (sample code!):

        public class ClaimsTransformer : ClaimsAuthenticationManager
        {
            public override IClaimsPrincipal Authenticate(
                string resourceName, IClaimsPrincipal incomingPrincipal)
            {
                if (!incomingPrincipal.Identity.IsAuthenticated)
                {
                    return base.Authenticate(resourceName, incomingPrincipal);
                }

                string accessToken;
                if (incomingPrincipal.TryGetClaimValue(
                    "http://www.facebook.com/claims/AccessToken", out accessToken))
                {
                    try
                    {
                        var home = GetFacebookHometown(accessToken);
                        if (!string.IsNullOrWhiteSpace(home))
                        {
                            incomingPrincipal.Identities[0].Claims.Add(
                                new Claim("http://www.facebook.com/claims/HomeTown", home));
                        }
                    }
                    catch { }
                }

                return incomingPrincipal;
            }

            private string GetFacebookHometown(string token)
            {
                var client = new FacebookClient(token);

                dynamic parameters = new ExpandoObject();
                parameters.fields = "hometown";

                dynamic result = client.Get("me", parameters);
                return result.hometown.name;
            }
        }

    Read the article
