Search Results

Search found 28672 results on 1147 pages for 'best practise'.


  • Chaining jQuery animations using recursion crashes browser

    - by Rob Sobers
    Here's the basic idea of what I'm trying to do:

    1. Set the innerHTML of a DIV to some value X
    2. Animate the DIV
    3. When the animation finishes, change the value of X and repeat N times

    If I do this in a loop, what ends up happening is that, because the animations occur asynchronously, the loop finishes and the DIV is set to its final value before the animations have had a chance to run for each value of X. As this question notes, the best way to solve this problem is to make a recursive call to the function in the callback handler for the animation. This way the value of the DIV doesn't change until the animation of the previous value is complete. This works perfectly...to a point. If I animate a bunch of these DIVs at the same time, my browser gets overwhelmed and crashes. Too much recursion. Can anyone think of a way to do this without using recursion?
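    One non-recursive approach is to pre-build the whole chain on jQuery's effects queue: each queued step sets the content, then hands off to the next step. A minimal sketch, assuming a hypothetical #box element, a values array, and a placeholder fade animation (none of these come from the question):

        // build the chain up front instead of recursing in callbacks
        var values = ["one", "two", "three"];
        var $div = $("#box");
        $.each(values, function (i, val) {
            $div.queue(function (next) {
                $div.html(val);   // set the content for this step
                next();           // run the next queued function
            });
            // placeholder animation between content changes
            $div.animate({ opacity: 0.25 }, 400)
                .animate({ opacity: 1 }, 400);
        });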

    Read the article

  • Using jsAnim.js

    - by mark
    I've been trying to set up a basic test animation using jsanim.js, using their example site to set up my HTML, CSS and JS. However, I just can't figure it out (not a developer...designer!), and there isn't a simple set of HTML, CSS and JS files to download showing how to, say, animate a DIV from left to right. The examples of how the library works are clear, but I'm missing something in the setup, and looking at their source on the site is nuts...too much going on in there. Thanks to anyone with experience with jsAnim.js http://www.jsanim.com Best, Mark

    Read the article

  • What's the correct way to hide/prevent access to wp-admin

    - by Jaypee
    I've been dealing with this matter for a while; I have read a ton of articles and stuff out there, but I couldn't find a place that shows the RIGHT way (standard, correct, whatever you like to call it) to prevent access to my wp-admin or wp-login.php. On all the WordPress sites I see (the well-made ones), you will never see anything if you type thesite.com/wp-admin. As far as I can see, one way to do this is by restricting access to that folder with an .htaccess file that limits access by IP. It seems to be the "cleanest" way to do it. What I'm not sure about is that I have a dynamic address provided by my ISP, so at some point my IP will change, and that will force me to also change the .htaccess to my new address, which I don't find practical. I can set a range too, but by doing that I would also authorize everyone within that range of IPs (other clients of my ISP, for example). I'm struggling to find the best/standard way to do this. Can anyone help me? Thanks
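    For reference, the IP-restriction approach described above usually looks like the sketch below (Apache 2.2 syntax; the 203.0.113.0/24 range is a placeholder, not a recommendation):

        # wp-admin/.htaccess: allow only a known address range
        Order Deny,Allow
        Deny from all
        Allow from 203.0.113.0/24

        # site-root .htaccess: shield wp-login.php the same way
        <Files wp-login.php>
            Order Deny,Allow
            Deny from all
            Allow from 203.0.113.0/24
        </Files>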

    Read the article

  • How to optimize a database suggestion engine

    - by Dimitar Vouldjeff
    Hi, I'm making an online engine for item-to-item movie recommendations. I have done some research, and I think the best way to implement it is to use Pearson correlation and make a table with item1, item2 and correlation fields. The problem is that after each rating of an item I have to regenerate the correlations for, in the worst case, N records (where N is the number of items). Another thing I read is the following article, but I haven't thought of a way to implement it. So what is your suggestion for optimizing this process? Or any other suggestions? Thanks.
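    As a concrete sketch of the pairwise table described above (the column names come from the question; storing each pair only once and the choice of primary key are assumptions):

        -- pairwise similarity table; store each pair once, with item1 < item2
        CREATE TABLE item_correlation (
            item1       INT   NOT NULL,
            item2       INT   NOT NULL,
            correlation FLOAT NOT NULL,
            PRIMARY KEY (item1, item2)
        );

        -- fetch the nearest neighbours of one item
        SELECT item2, correlation
        FROM item_correlation
        WHERE item1 = 42
        ORDER BY correlation DESC
        LIMIT 10;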

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1GB RAM. Currently, I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of Unicorn workers. So what is the best way to determine the number of Unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (scored only once, at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can, for now, go with 4 workers, leaving room for unexpected bursts of requests? What are the metrics I should look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?
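    For reference, the worker count lives in the Unicorn config file; a minimal sketch (the numbers are assumptions to adjust for the app):

        # config/unicorn.rb
        worker_processes 4   # start low and raise it while watching memory
        timeout 30

    A rough way to see per-worker memory on Ubuntu is something like ps aux | grep unicorn and reading the RSS column; worker count times per-worker RSS needs to stay comfortably under the 1GB cap.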

    Read the article

  • Will using https prevent the client from accessing confidential data inside JavaScript

    - by saveing saving
    I am working on an ASP.NET MVC web application. In the view I wrote the following JavaScript, which calls an external web service:

        <script type="text/javascript">
            $(function () {
                $.getJSON("https://MyERPsystem.com/jw/web/json/hr/getsalary/byid?master_username=superadmin&password_hash=9449B5ABCFA9AFDA36B801351ED3DF66&employeeid=A200121",
                    { /* code goes here */ },
                    function (data) {
                        $.each(data.items, function (i, item) {
                            // code goes here
                        });
                    });
            })
        </script>

    So if the external web service implements https, does this mean that the master_username and password_hash inside the JavaScript cannot be seen by external users? Best Regards

    Read the article

  • SubSonic 3: get only certain columns

    - by CTGA
    Hello, I use: SubSonic 3, SQL Server 2008, Json.NET. Here is my problem: I have a house table with id, lot number, address, location, ownerid, and an owner table with id, name, description. I'd like to get only the owner name and the address of houses whose location is "San Francisco". How can I do this with SubSonic? My problem is that I can only get a typed list or a datareader; I don't understand how to get just an array or a simple list. What is the best solution? Create a custom class? Create a view? Anything else? My aim is then to serialize the data and send it back to my application, but serialization is seriously slow because of the numerous relationships (my example here is simplified; there are in fact house - owner - city - country, etc.), whereas I need only 2 fields... Thank you
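    If the view route is chosen, a minimal SQL sketch of the shape described (the table and column names come from the question; the view name is made up):

        CREATE VIEW owner_addresses AS
        SELECT o.name, h.address, h.location
        FROM house h
        JOIN owner o ON o.id = h.ownerid;

        -- then select only the two needed fields
        SELECT name, address
        FROM owner_addresses
        WHERE location = 'San Francisco';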

    Read the article

  • Should Marketing departments have basic HTML skills?

    - by Phil.Wheeler
    Working within an organisation as part of the in-house site development team, a lot of my team's throughput is driven by the colouring-in (marketing) department. It is their responsibility to provide approved content and imagery for the features or enhancements that we include on each iteration of the company site. One thing I've noticed in this job and several previous ones is that the Marketing department is extremely particular about wording and presentation, but has little to no understanding of the actual medium with which they're working - the web. I find that my team is constantly making best guesses for various HTML attributes like image alt text, titles, rel tags, blockquote cite attributes and the like. How reasonable is it to expect that marketing departments have a strong understanding of the purpose of HTML metadata? Should it be the developer's job to remind and inform each time or are marketing departments falling behind the technology they're working with? What could I reasonably expect our marketing department to understand and provide every time with each new work request?

    Read the article

  • How to catch an exception on Rollback

    - by Jagd
    What is the best way to implement error handling for a SqlTransaction Rollback that already sits inside a catch clause? My code is roughly like this:

        using (SqlConnection objSqlConn = new SqlConnection(connStr))
        {
            objSqlConn.Open();
            using (SqlTransaction objSqlTrans = objSqlConn.BeginTransaction())
            {
                try
                {
                    // code
                    // more code
                    // and more code
                }
                catch (Exception ex)
                {
                    // What happens if Rollback() throws an exception?
                    objSqlTrans.Rollback();
                    throw ex;
                }
            }
        }

    I believe that my application had an exception in the try block, which in turn was caught in the catch block, and then the rollback was attempted. However, the error that I'm seeing says something about SqlTransaction.ZombieCheck(), which makes me wonder if the Rollback() itself threw an exception as well. So, do I need to implement some type of error handling around the Rollback()? How do I do that while still holding on to the exception that put execution into the catch block in the first place?
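    A minimal sketch of one common pattern (an assumption, not the only answer), slotting into the catch clause above: wrap the rollback in its own try/catch so a rollback failure cannot mask the original exception, and rethrow with a bare throw to preserve the stack trace:

        catch (Exception ex)
        {
            try
            {
                objSqlTrans.Rollback();
            }
            catch (Exception rollbackEx)
            {
                // the transaction may already be dead (e.g. the connection
                // was broken); log rollbackEx rather than letting it
                // replace the original exception
            }
            throw;   // rethrows ex without resetting its stack trace
        }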

    Read the article

  • Advanced click counter: MySQL or flat file

    - by jay
    Hi, first of all thank you for looking. What's the best method for making an advanced click counter (e.g. order by views [today] | [yesterday] | [this week] | [last week] | [this month] | [last month] | [all time])? Is it better to use a flat file or MySQL? This is the MySQL structure I came up with:

        id      (type: int(11))
        link_id (type: int(11))
        date    (type: date)
        counter (type: int(11))

    Please can you advise me on the most effective way of doing this.
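    A minimal sketch of how that structure could be kept per day and then rolled up (the column names come from the question; the unique key and the upsert are assumptions):

        CREATE TABLE clicks (
            id      INT(11) NOT NULL AUTO_INCREMENT,
            link_id INT(11) NOT NULL,
            date    DATE    NOT NULL,
            counter INT(11) NOT NULL DEFAULT 1,
            PRIMARY KEY (id),
            UNIQUE KEY uq_link_date (link_id, date)
        );

        -- one upsert per click keeps a single row per link per day
        INSERT INTO clicks (link_id, date) VALUES (42, CURDATE())
            ON DUPLICATE KEY UPDATE counter = counter + 1;

        -- "this week" rollup, ordered by views
        SELECT link_id, SUM(counter) AS views
        FROM clicks
        WHERE date >= CURDATE() - INTERVAL 7 DAY
        GROUP BY link_id
        ORDER BY views DESC;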

    Read the article

  • Standalone jQuery "touch" method?

    - by dclowd9901
    So, I'm looking to implement the ability for a plugin I wrote to read a touch "swipe" from a touch-capable internet device, like an iPhone, iPad or Android phone. Is there anything out there? I'm not looking for something as full as jQTouch, though I was considering reverse-engineering the code I need out of it. Any suggestions on the best way to approach this? A snippet of code already available? Addendum: I realize in hindsight the solution won't strictly be jQuery, as I'm pretty sure there aren't any built-in methods to handle this. I would expect standard JavaScript to find itself in the answer.
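    A minimal sketch in plain JavaScript using the standard touch events (the element id, the 50px threshold and the horizontal-only check are assumptions):

        var el = document.getElementById("swipe-area"),   // hypothetical element
            startX = null;

        el.addEventListener("touchstart", function (e) {
            startX = e.touches[0].pageX;   // remember where the finger went down
        }, false);

        el.addEventListener("touchend", function (e) {
            if (startX === null) return;
            var dx = e.changedTouches[0].pageX - startX;
            startX = null;
            if (Math.abs(dx) > 50) {
                // dx > 0: swipe right, dx < 0: swipe left
            }
        }, false);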

    Read the article

  • Saving WPF Window Size and Position

    - by moogs
    What would be the best way to save the window position and size in a WPF app? Currently, I'm saving the window size and position of a WPF app. Here are the events I handle:

    SourceInitialized: the saved info is loaded onto the window.
    WindowClosing: the current info is saved to the backing store (I copied this from an example).

    The problem is, when the window is minimized and restored, the settings from the last WindowClosing are retrieved. Now, the StateChanged event fires AFTER the window has minimized, so it does not seem to be what I need. Thanks
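    A minimal sketch of one common approach (an assumption, not necessarily the fix the author ended up with): persist Window.RestoreBounds rather than the live Left/Top/Width/Height, since it reports the normal-state bounds even while the window is minimized or maximized. SaveToStore is a hypothetical persistence helper:

        protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
        {
            Rect bounds = this.WindowState == WindowState.Normal
                ? new Rect(this.Left, this.Top, this.Width, this.Height)
                : this.RestoreBounds;                  // last normal-state bounds
            SaveToStore(bounds, this.WindowState);     // hypothetical helper
            base.OnClosing(e);
        }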

    Read the article

  • Is there a name for the technique of using base-2 numbers to encode a list of unique options?

    - by Lunatik
    Apologies for the rather vague nature of this question; I've never been taught programming, and Google is rather useless to a self-help guy like me in this case, as the key words are pretty ambiguous. I am writing a couple of functions that encode and decode a list of options into a Long so they can easily be passed around the application. You know the kind of thing:

        1 - Apple
        2 - Orange
        4 - Banana
        8 - Plum
        etc.

    In this case the number 11 would represent Apple, Orange & Plum. I've got it working, but I see this used all the time, so I assume there is a common name for the technique, and no doubt all sorts of best practice and clever algorithms that are at the moment just out of my reach.
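    For what it's worth, this technique is commonly known as bit flags or a bitmask: each option occupies one bit of the number. A minimal sketch mirroring the question's example (C# is an assumption here; the question doesn't name a language):

        using System;

        [Flags]
        public enum Fruit : long
        {
            None   = 0,
            Apple  = 1,
            Orange = 2,
            Banana = 4,
            Plum   = 8
        }

        // encode: combine options with bitwise OR
        Fruit basket = Fruit.Apple | Fruit.Orange | Fruit.Plum;   // numeric value 11

        // decode: test one option with bitwise AND
        bool hasOrange = (basket & Fruit.Orange) != 0;            // true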

    Read the article

  • How to initialize an F# list when the size is unknown, using a while..do loop

    - by James Black
    I have a function that will parse the results of a DataReader, and I don't know how many items are returned, so I want to use a while..do loop to iterate over the reader, and the outcome should be a list of a certain type.

        (fun (reader) ->
            [ while reader.Read() do
                new CityType(Id = (reader.GetInt32 0),
                             Name = (reader.GetString 1),
                             StateName = (reader.GetString 2)) ])

    This is what I tried, but the warning I get is: "This expression should have type 'unit', but has type 'CityType'. Use 'ignore' to discard the result of the expression, or 'let' to bind the result to a name." So what is the best way to iterate over a DataReader and create a list?
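    A minimal sketch of the likely fix (an assumption based on the warning text): inside an F# list comprehension, each element produced by the loop body has to be yielded explicitly:

        (fun (reader: System.Data.IDataReader) ->
            [ while reader.Read() do
                yield CityType(Id = (reader.GetInt32 0),
                               Name = (reader.GetString 1),
                               StateName = (reader.GetString 2)) ])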

    Read the article

  • .NET on Windows to Mono on Ubuntu

    - by Srikanth
    I am looking at the possibility of changing my ASP.NET 2.0 app to the Mono framework. I have used the Mono analyzer tool and it does detect some P/Invoke and interop dependencies. For example:

    1) We use Excel interop, and on Linux we are looking to use StarOffice/OpenOffice instead. Is there an easy way of substituting Excel with StarOffice? (I know it sounds bizarre, but I just don't want to miss out in case anyone has done it already.)
    2) LDAP auth: what could be the best alternative on Ubuntu (or another flavour of Linux)?
    3) Is there an Ajax framework for Mono? Preferably with similar controls to Atlas?

    I hope I am not too ambitious here... thanks.

    Read the article

  • Float multiple fixed-width / variable-height boxes into 2 columns

    - by Jeremy H
    I'll try to explain this as best I can. I have multiple divs that are fixed-width but variable-height. I want to float these boxes into two columns inside a fixed-width container. What happens when I give them all a float: left value is something like this:

        ######### #########
        # box 1 # # box 2 #
        ######### # ..... #
        ......... # ..... #
        ......... #########
        ######### #########
        # box 3 # # box 4 #
        # ..... # # ..... #
        ######### #########
        ######### #########
        # box 5 # # box 6 #
        # ..... # #########
        # ..... #
        #########

    (The periods are white space.) What I would really like is for the top of box 3 to touch the bottom of box 1. Any easy way to achieve this?
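    A minimal sketch of one common workaround (an assumption, not the only answer): instead of floating every box in one container, float two column wrappers and stack half the boxes in each, so each box sits directly under the one above it. The class names and widths are placeholders:

        /* hypothetical markup:
           <div class="col"> boxes 1, 3, 5 </div>
           <div class="col"> boxes 2, 4, 6 </div> */
        .container { width: 420px; }
        .container .col { float: left; width: 200px; margin-right: 10px; }
        .container .col .box { width: 200px; margin-bottom: 10px; }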

    Read the article

  • How should I build my GUI in Qt?

    - by Apollo
    I am wondering which is the best way to start building a GUI plus its software in Qt. I am trying to build a sound media player based on the MVC pattern. So far I have found 3 ways to do it:

    1) Should I use a .ui file via Qt Designer? Is it flexible enough?
    2) Should I use QML to make the design and then integrate it into a C++ application?
    3) Should I just start from scratch and write it by hand, without Qt Designer, using the Qt libraries?

    Thank you very much for your answers.

    Read the article

  • Display another field in the referenced table for multiple columns with performance issues in mind

    - by israkir
    I have a table of edges like this:

        -------------------------------
        | id | arg1 | relation | arg2 |
        -------------------------------
        | 1  | 1    | 3        | 4    |
        | 2  | 2    | 6        | 5    |
        -------------------------------

    where arg1, relation and arg2 reference the ids of objects in another table, object:

        --------------------
        | id | object_name |
        --------------------
        | 1  | book        |
        | 2  | pen         |
        | 3  | on          |
        | 4  | table       |
        | 5  | bag         |
        | 6  | in          |
        --------------------

    What I want to do, keeping performance in mind (it's a very big table, more than 50 million entries), is display the object_name for each edge entry rather than the id, such as:

        ---------------------------
        | arg1 | relation | arg2  |
        ---------------------------
        | book | on       | table |
        | pen  | in       | bag   |
        ---------------------------

    What is the best SELECT query to do this? Also, I am open to suggestions for optimizing the query: adding more indexes on the tables, etc. EDIT: Based on the comments below: 1) @Craig Ringer: PostgreSQL version 8.4.13; the only index is id, on both tables. 2) @andrefsp: edge is almost 2x bigger than object.
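    A minimal sketch of the triple join described above (the table and column names come from the question; the index is a suggestion, not a measured result):

        SELECT a1.object_name AS arg1,
               r.object_name  AS relation,
               a2.object_name AS arg2
        FROM edge e
        JOIN object a1 ON a1.id = e.arg1
        JOIN object r  ON r.id  = e.relation
        JOIN object a2 ON a2.id = e.arg2;

        -- if object.id is the primary key, the three lookups above are
        -- already indexed; indexes on the edge side mainly help filtered
        -- queries, e.g.:
        CREATE INDEX edge_arg1_idx ON edge (arg1);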

    Read the article

  • Is Work Stealing always the most appropriate user-level thread scheduling algorithm?

    - by Il-Bhima
    I've been investigating different scheduling algorithms for a thread pool I am implementing. Due to the nature of the problem I am solving I can assume that the tasks being run in parallel are independent and do not spawn any new tasks. The tasks can be of varying sizes. I went immediately for the most popular scheduling algorithm "work stealing" using lock-free deques for the local job queues, and I am relatively happy with this approach. However I'm wondering whether there are any common cases where work-stealing is not the best approach. For this particular problem I have a good estimate of the size of each individual task. Work-stealing does not make use of this information and I'm wondering if there is any scheduler which will give better load-balancing than work-stealing with this information (obviously with the same efficiency). NB. This question ties up with a previous question.

    Read the article

  • C# PropertyGrid drag drop

    - by gametheoryonline
    I'm trying to implement drag/drop support for a PropertyGrid in C# using VS2005 (.NET 2.0). The PropertyGrid can handle the DragEnter etc. events, but there doesn't seem to be a way to get the grid item under the pointer during a drag event. The best I've been able to do so far is to use the SelectedGridItem property to retrieve a custom PropertyDescriptor and set the value, but this requires a grid item to already be selected before starting the drag/drop operation. Has anyone had any luck with implementing this? Thanks :-)

    Read the article

  • Approaches for Error Code/Message Management in .NET

    - by WayneC
    Looking for suggestions/best practices on managing error codes and messages in a multi-tiered application. Specifically, things like:

    Where should error codes be defined? An enum? A class?
    How are error messages or further details associated with the error codes? Resource files? Attributes on enum values, etc.?
    If you have a multi-tier application consisting of DAL, BLL, UI, and Common projects, for example, should there be a single giant list of codes for all tiers, or are the codes extensible by project/tier?

    Update: it's important to mention that I can't rely solely on exceptions and custom exception types for error reporting, as some clients of this application will connect via web services (SOAP & REST). Any suggestions welcome!
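    A minimal sketch of the enum-plus-attributes option mentioned above (one possibility among several; the codes and messages here are made up):

        using System;
        using System.ComponentModel;

        public enum ErrorCode
        {
            [Description("The requested user does not exist.")]
            UserNotFound = 1001,

            [Description("The account is locked.")]
            AccountLocked = 1002
        }

        public static class ErrorMessages
        {
            // resolve the message for a code via its Description attribute
            public static string GetMessage(ErrorCode code)
            {
                var field = typeof(ErrorCode).GetField(code.ToString());
                var attrs = (DescriptionAttribute[])field.GetCustomAttributes(
                    typeof(DescriptionAttribute), false);
                return attrs.Length > 0 ? attrs[0].Description : code.ToString();
            }
        }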

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */
        INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should try to be a multiple of the number of location files, and the files should try to be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or the files are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load, it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
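    For completeness, the target-table attributes mentioned in the load section, written out as DDL; a minimal sketch (the table name and columns are placeholders, not from the tutorial):

        CREATE TABLE my_oracle_table (
            id   NUMBER,
            data VARCHAR2(4000)
        )
        NOLOGGING
        PARALLEL;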

    Read the article

  • Move websites from IIS7 to IIS 7.5

    - by Adam Winter
    Can anyone suggest the best way of moving websites from server1 running IIS7 to server2 running IIS 7.5? I've read some articles that suggest copying the applicationHost.config file while preserving the configProtectedData node, but I'm concerned there may be settings in the IIS 7.5 config that don't exist in the current IIS7 config and would be lost. I've also seen suggestions of moving each site individually by using a command like this:

        AppCmd.exe LIST SITE "My Site" /config /XML mysite.xml

    This method just takes too long to do for dozens of sites. There must be a better way of moving all the sites at once to the new platform.
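    One commonly suggested route for bulk moves is Microsoft's Web Deploy (msdeploy) tool, which can sync an entire IIS server in one operation; a sketch of the usual form (the server names are placeholders, and the exact provider options depend on the Web Deploy version installed):

        msdeploy -verb:sync -source:webServer,computerName=server1 -dest:webServer,computerName=server2 -whatif

    Dropping -whatif performs the actual sync after the dry run looks right.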

    Read the article

  • Improve performance of website

    - by Vinodtiru
    Hi, I have designed a new website and hosted it online. I want it to perform as well as possible and load pages fast. The website is built in PHP 5.0+ using CodeIgniter, with MySQL as the DB. I have images on it, and I am using a Nitobi grid for displaying sets of records on a page; the rest is all normal page controls. As I am not very experienced with website performance factors, I would like to get suggestions and details on factors that can improve the performance of a website. Please let me know how I can improve performance. Also please let me know if there are any ways to measure the performance of a website, and any websites or tools to help test performance. Any kind of help is appreciated. Thanks in advance. Thanks and Regards, Vinod T.

    Read the article

  • Web services Authentication Jungle

    - by redben
    I have been doing some research lately about the best approaches to authenticating web service calls (REST, SOAP or whatever), but none of the approaches convinced me, and I still can't make a choice... Some talk about SSL and HTTP Basic authentication (login/password), which just seems weird for a machine (I mean having to assign a login/password to a machine, or is it not?). Others say API keys (this scheme seems to be used more for tracking than really for securing). Some say tokens (like session IDs), but shouldn't we stay stateless (especially with REST)? In my use case, when a remote app calls one of our web services, I obviously have to authenticate the calling application, and the call must, if applicable, tell me which user it impersonates so I can deal with authorization later. Any thoughts?

    Read the article
