Search Results

Search found 16467 results on 659 pages for 'request filtering'.


  • Overlay an HTML page with an HTML form

    - by jah
    Hi folks, this is a question about the best way (or the least effort among the best ways) to overlay an HTML page with a form. "Best" in this context means the best user experience whilst meeting the functional requirements. Let's say I have a page with a short form on it; the user has to enter some financial details. To assist the user in entering an accurate value for one of the fields there's another, much longer form. The longer form needs to be displayed only if the user requests the help. For users without JavaScript, clicking a link will submit the short form (persisting the already-filled fields in a session) and the server will respond with the long form. They'll submit the long form and the server will combine the submitted data with the persisted data and serve the short form again - with the fields populated. For users with JavaScript I want to overlay the short form page (in a lightbox style) with the long form, let them populate the long form and then go back to the short form with fewer round-trips to the server. Do I: Overlay the short form page with an iframe whose target is the long form? Request the long form over Ajax and stuff it into a div? Generate the long form entirely on the client side? Some other wizardry I haven't thought of? A short explanation of the best mechanism will do me very nicely indeed. Thank you very much!


  • Elmah on a WebServer

    - by Subbarao
    I set up Elmah to work on a website. It works fine on my local machine, but when I moved it to a web server I get this exception: Configuration Error Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: Unrecognized configuration section elmah/security. Source Error: Line 110: </connectionStrings> Line 111: <elmah> Line 112: <security allowRemoteAccess="1" /> Line 113: <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="CadaretGrantConnectionString"/> Line 114: <!-- Don't log 404 --> It shows me an error at Line 112. What should be done in order to get Elmah to work with remote access? Below is my configuration: <elmah> <security allowRemoteAccess="1" /> <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="ConnectionString"/> <!-- Don't log 404 --> <errorFilter> <test> <equal binding="HttpStatusCode" value="404" valueType="Int32"/> </test> </errorFilter> </elmah>
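
    The "Unrecognized configuration section elmah/security" parser error usually means the custom section declarations are missing from <configSections> in the web.config that actually gets deployed to the server. A sketch of the standard Elmah declarations (verify the assembly name and version against the Elmah build in use, and keep only the sections the site really uses):

        <configSections>
          <sectionGroup name="elmah">
            <section name="security" requirePermission="false" type="Elmah.SecuritySectionHandler, Elmah" />
            <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah" />
            <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah" />
            <section name="errorFilter" requirePermission="false" type="Elmah.ErrorFilterSectionHandler, Elmah" />
          </sectionGroup>
        </configSections>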


  • IE9 selectAllChildren on an out-of-view element

    - by MrSlayer
    I am trying to replicate a service that is provided by Tynt.com that appends some text to a user's selection when copying. I understand that users don't particularly like this, but it is a client's request to append the URL and copyright notice whenever a user copies something from their website. In current browsers, I am able to do this by creating a DOM element, adding the selected text, appending the copyright text and then selecting the new node: var newSelection = document.createElement( 'div' ); newSelection.style.cssText = "height: 1px; width: 1px; overflow: hidden;"; if( window.getSelection ) { var selection = window.getSelection( ); if( selection.getRangeAt ) { var range = selection.getRangeAt( 0 ); newSelection.appendChild( range.cloneContents( ) ); appendCopyright( ); document.body.appendChild( newSelection ); selection.selectAllChildren( newSelection ); // ... remove element, return selection } } In IE9, this errors out on the selection.selectAllChildren( newSelection ) statement and I was able to figure out that this is because newSelection was effectively "hidden" from the viewport due to the styles applied in the second line above. Commenting that out works, but obviously the new node is shown to the end user. It appears that this was resolved in later versions of IE, but I am having trouble coming up with a workaround that is sufficient for IE9, a browser that I need to support. I've tried a variety of alternatives, like setting visibility: hidden;, positioning it off-screen, and trying some alternative selection functions, but they each present different problems. The error thrown by IE is: SCRIPT16389: Unspecified error.


  • Best way to manipulate and compare strings

    - by vtortola
    I'm developing a REST service, so a request could be something like this: /Data/Main/Table=Customers/ I need to get the segments one by one, and for each segment I will decide which object I'm going to use; after that I'll pass to that object the rest of the query so it can decide what to do next. Basically, the REST query is a path on a tree :P This implies lots of String operations (depending on the query complexity), but StringBuilder is useful just for concatenation and removal; you cannot perform a search with IndexOf or similar. I've developed this class that fulfills my requirement, but the problem is that it is manipulating Strings, so every time I get one segment ... I'll create extra Strings because String is an immutable data type: public class RESTQueryParser { String _query; public RESTQueryParser(String query) { _query = query; } public String GetNext() { String result = String.Empty; Int32 startPosition = _query.StartsWith("/", StringComparison.InvariantCultureIgnoreCase) ? 1 : 0; Int32 i = _query.IndexOf("/", startPosition, StringComparison.InvariantCultureIgnoreCase) - 1; if (!String.IsNullOrEmpty(_query)) { if (i < 0) { result = _query.Substring(startPosition, _query.Length - 1); _query = String.Empty; } else { result = _query.Substring(startPosition, i); _query = _query.Remove(0, i + 1); } } return result; } } The server should support a lot of calls, and the queries could be huge, so this is going to be a very repetitive task. I don't really know how big the impact on memory and performance is; I've just read about it in some books. Should I implement a class that manages a Char[] instead of Strings and implements the methods that I want? Or should I be OK with this one? Regular expressions maybe?
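
    One low-allocation alternative (a sketch, not a drop-in replacement for the class above) is to leave the original string untouched and advance an index into it, so each call allocates only the segment it returns rather than rebuilding the remaining query every time:

        public sealed class RestSegmentReader
        {
            private readonly string _query;
            private int _pos;

            public RestSegmentReader(string query)
            {
                _query = query ?? string.Empty;
                _pos = 0;
            }

            // Returns the next path segment, or null once the query is exhausted.
            public string GetNext()
            {
                if (_pos >= _query.Length) return null;
                if (_query[_pos] == '/') _pos++;            // skip the separator
                if (_pos >= _query.Length) return null;

                int next = _query.IndexOf('/', _pos);
                string segment = next < 0
                    ? _query.Substring(_pos)
                    : _query.Substring(_pos, next - _pos);

                _pos = next < 0 ? _query.Length : next + 1; // move past the segment just returned
                return segment;
            }
        }

    For "/Data/Main/Table=Customers/" this yields "Data", "Main", "Table=Customers" and then null, without mutating or copying the source string.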


  • Preventing iframe caching in browser

    - by Zarjay
    How do you prevent Firefox and Safari from caching iframe content? I have a simple webpage with an iframe to a page on a different site. Both the outer page and the inner page have HTTP response headers to prevent caching. When I click the "back" button in the browser, the outer page works properly, but no matter what, the browser always retrieves a cache of the iframed page. IE works just fine, but Firefox and Safari are giving me trouble. My webpage looks something like this: <html> <head><!-- stuff --></head> <body> <!-- stuff --> <iframe src="webpage2.html?var=xxx" /> <!-- stuff --> </body> </html> The var variable always changes. Despite the fact that the URL of the iframe has changed (and thus, the browser should be making a new request to that page), the browser just fetches the cached content. I've examined the HTTP requests and responses going back and forth, and I noticed that even if the outer page contains <iframe src="webpage2.html?var=222" />, the browser will still fetch webpage2.html?var=111. Here's what I've tried so far: Changing iframe URL with random var value Adding Expires, Cache-Control, and Pragma headers to outer webpage Adding Expires, Cache-Control, and Pragma headers to inner webpage I'm unable to do any JavaScript tricks because I'm blocked by the same-origin policy. I'm running out of ideas. Does anyone know how to stop the browser from caching the iframed content?


  • Having trouble with extension methods for byte arrays

    - by Dave
    I'm working with a device that sends back an image, and when I request an image, there is some undocumented information that comes before the image data. I was only able to realize this by looking through the binary data and identifying the image header information inside. I've been able to make everything work fine by writing a method that takes a byte[] and returns another byte[] with all of this preamble "stuff" removed. However, what I really want is an extension method so I can write image_buffer.RemoveUpToByteArray(new byte[] { 0x42, 0x4D }); instead of byte[] new_buffer = RemoveUpToByteArray( image_buffer, new byte[] { 0x42, 0x4D }); I first tried to write it like everywhere else I've seen online: public static class MyExtensionMethods { public static void RemoveUpToByteArray(this byte[] buffer, byte[] header) { ... } } but then I get an error complaining that there isn't an extension method where the first parameter is a System.Array. Weird, everyone else seems to do it this way, but okay: public static class MyExtensionMethods { public static void RemoveUpToByteArray(this Array buffer, byte[] header) { ... } } Great, that takes now, but still doesn't compile. It doesn't compile because Array is an abstract class and my existing code that gets called after calling RemoveUpToByteArray used to work on byte arrays. I could rewrite my subsequent code to work with Array, but I am curious -- what am I doing wrong that prevents me from just using byte[] as the first parameter in my extension method?
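
    For what it's worth, extension methods whose first parameter is byte[] do compile, provided the containing class is static, non-nested, non-generic and the project targets C# 3.0 or later; note too that the method has to return the new array, since it cannot reassign the buffer parameter in place. A self-contained sketch - the exact keep/drop semantics around the header are an assumption:

        using System;

        public static class ByteArrayExtensions
        {
            // Returns a new array starting at the first occurrence of 'header';
            // returns the original buffer unchanged if the header is not found.
            public static byte[] RemoveUpToByteArray(this byte[] buffer, byte[] header)
            {
                if (buffer == null) throw new ArgumentNullException("buffer");
                if (header == null || header.Length == 0) return buffer;

                for (int i = 0; i + header.Length <= buffer.Length; i++)
                {
                    bool match = true;
                    for (int j = 0; j < header.Length; j++)
                    {
                        if (buffer[i + j] != header[j]) { match = false; break; }
                    }
                    if (match)
                    {
                        byte[] result = new byte[buffer.Length - i];
                        Array.Copy(buffer, i, result, 0, result.Length);
                        return result;
                    }
                }
                return buffer;
            }
        }

    With that in place, image_buffer.RemoveUpToByteArray(new byte[] { 0x42, 0x4D }) returns the trimmed copy rather than modifying image_buffer itself.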


  • I want some data from my Facebook account but it won't let me

    - by Pankaj Mishra
    I am using http://lite.facebook.com and I want to get some data from my account. I am using HttpWebRequest for this. I am able to log in to Facebook with my credentials using a web request, and I got the profile URL from the home page HTML. Now, when I try to get the list of all friends, it kicks me back to the login page. For the login I am using this code: string HtmlData = httpHelper.getHtmlfromUrl(new Uri(FacebookUrls.Lite_MainpageUrl)); lstInput = globussRegex.GetInputControlsNameAndValueInPage(HtmlData); foreach (string str in lstInput) { if (str.Contains("lsd")) { int FirstPoint = str.IndexOf("name=\"lsd\""); if (FirstPoint > 0) { TempHtmlData = str.Substring(FirstPoint).Replace("name=\"lsd\"","").Replace("value",""); } int SecondPoint = TempHtmlData.IndexOf("/>"); if (SecondPoint > 0) { Value = TempHtmlData.Substring(0, SecondPoint).Replace("=", "").Replace("\\", "").Replace("\"", "").Replace(" ", ""); } } } string LoginData = "form_present=1&lsd=" + Value + "&email=" + UserName + "&password=" + Password + ""; string ResponseData = httpHelper.postFormData(new Uri(FacebookUrls.Lite_LoginUrl), LoginData); int FirstProfileTag = ResponseData.IndexOf("/p/"); int SecondProfileTag = ResponseData.IndexOf("\">Profile"); if (FirstProfileTag > 0 && SecondProfileTag > 0) { string TempProfileUrl = ResponseData.Substring(FirstProfileTag, SecondProfileTag - FirstProfileTag); string ProfileUrl = FacebookUrls.Lite_ProfileUrl + TempProfileUrl; GetUserProfileData(ProfileUrl); } And for getting the profile URL and the friend-list URL I am doing this: string HtmlData = httpHelper.getHtmlfromUrl(new Uri(ProfileUrl)); string FriendUrl = "http://lite.facebook.com" + "/p/Pankaj-Mishra/1187787295/friends/"; string HtmlData1 = httpHelper.getHtmlfromUrl(new Uri(FriendUrl)); I got a perfect result when I tried the ProfileUrl, but when I tried the FriendUrl it logged me out. How can I solve this problem? Please help me.
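
    The helper class in the question isn't shown, but a common cause of being bounced back to the login page on a later request is that the session cookies returned by the login response are not re-sent. A minimal sketch (the class and method names here are illustrative, not the poster's httpHelper) of sharing one CookieContainer across every HttpWebRequest:

        using System.IO;
        using System.Net;

        public class CookieAwareFetcher
        {
            // One container shared by every request keeps the login session cookies alive.
            private readonly CookieContainer _cookies = new CookieContainer();

            public string Get(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.CookieContainer = _cookies;
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }

            public string Post(string url, string formData)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.CookieContainer = _cookies;   // same container as the GET calls
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";
                using (var writer = new StreamWriter(request.GetRequestStream()))
                {
                    writer.Write(formData);
                }
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }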


  • High-load MySQL on a Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have Debian server with 32 gb memory. And there is apache2, memcached and nginx on this server. Memory load always on maximum. Only 500m free. Most memory leak do MySql. Apache only 70 clients configured, other services small memory usage. When mysql use all memory it stops. And nothing works, need mysql reboot. Mysql configured use maximum 24 gb memory. I have hight weight InnoDB bases. (400000 rows, 30 gb). And on server multithread daemon, that makes many inserts in this tables, thats why InnoDB. There is my mysql config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please, help me make it stable. Memory used /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem not in memory, but MySQL stops every day. As you can see, cache memory free 24 gb. Thank to Michael Hampton? for correction. Load overage on server 3.5. Maybe hdd or another problem? Maybe my config not optimal for 30gb InnoDB ? I'm already try mysqltuner and tunung-primer.sh , but they marked all green. Mysqltuner output mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'


  • How do I compress a Json result from ASP.NET MVC with IIS 7.5

    - by Gareth Saul
    I'm having difficulty making IIS 7 correctly compress a Json result from ASP.NET MVC. I've enabled static and dynamic compression in IIS. I can verify with Fiddler that normal text/html and similar records are compressed. Viewing the request, the accept-encoding gzip header is present. The response has the mimetype "application/json", but is not compressed. I've identified that the issue appears to relate to the MimeType. When I include mimeType="*/*", I can see that the response is correctly gzipped. How can I get IIS to compress WITHOUT using a wildcard mimeType? I assume that this issue has something to do with the way that ASP.NET MVC generates content type headers. The CPU usage is well below the dynamic throttling threshold. When I examine the trace logs from IIS, I can see that it fails to compress due to not finding a matching mime type. <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" noCompressionForProxies="false"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/json" enabled="true" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="application/json" enabled="true" /> </staticTypes> </httpCompression>
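
    Before falling back to a wildcard, it may be worth adding the exact Content-Type string the response carries - application/json; charset=utf-8 - to the dynamicTypes list, since the trace reports the failure as not finding a matching mime type. If the IIS matching still can't be made to cooperate, another workaround (a sketch, not the poster's configuration) is to gzip the response inside the MVC application with an action filter, which sidesteps IIS dynamic compression entirely:

        using System;
        using System.IO.Compression;
        using System.Web;
        using System.Web.Mvc;

        // Compresses the action's output in the application, independent of IIS httpCompression.
        public class CompressJsonAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                HttpRequestBase request = filterContext.HttpContext.Request;
                HttpResponseBase response = filterContext.HttpContext.Response;
                string acceptEncoding = request.Headers["Accept-Encoding"] ?? string.Empty;

                if (acceptEncoding.IndexOf("gzip", StringComparison.OrdinalIgnoreCase) >= 0)
                {
                    response.AppendHeader("Content-Encoding", "gzip");
                    response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
                }
            }
        }

    Decorating the JSON action with [CompressJson] then compresses its output whenever the client advertises gzip support.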


  • ASP.NET MVC - Extending the Authorize Attribute

    - by Mad Halfling
    Hi folks, currently I use [Authorize(Roles = ".....")] to secure my controller actions on my ASP.NET MVC 1 app, and this works fine. However, certain search views need to have buttons that route to these actions that need to be enabled/disabled based on the record selected on the search list, and also the security privs of the user logged in. Therefore I think I need to have a class accessing a DB table which cross-references these target controller/actions with application roles to determine the state of these buttons. This will, obviously, make things messy as privs will need to be maintained in 2 places - in that class/DB table and also on the controller actions (plus, if I want to change the access to the action I will have to change the code and compile rather than just change a DB table entry). Ideally I would like to extend the [Authorize] functionality so that instead of having to specify the roles in the [Authorize] code, it will query the security class based on the user, controller and action and that will then return a boolean allowing or denying access. Are there any good articles on this - I can't imagine it's an unusual thing to want to do, but I seem to be struggling to find anything on how to do it (could be Monday-morning brain). I've started some code doing this, looking at article http://schotime.net/blog/index.php/2009/02/17/custom-authorization-with-aspnet-mvc/ , and it seems to be starting off ok but I can't find the "correct" way to get the calling controller and action values from the httpContext - I could possibly fudge a bit of code to extract them from the request url, but that doesn't seem right to me and I'd rather do it properly. Cheers MH
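
    One way to get at the calling controller and action without parsing the URL is to override OnAuthorization, whose AuthorizationContext carries the route data. A sketch of the idea (SecurityService is a hypothetical wrapper around the roles table described above, not an existing API):

        using System.Web.Mvc;

        // Hypothetical lookup against the controller/action/role table.
        public static class SecurityService
        {
            public static bool IsAllowed(string user, string controller, string action)
            {
                return false; // placeholder for the DB query
            }
        }

        public class DbAuthorizeAttribute : AuthorizeAttribute
        {
            public override void OnAuthorization(AuthorizationContext filterContext)
            {
                // The calling controller and action come from the route data, not the raw URL.
                string controller = (string)filterContext.RouteData.Values["controller"];
                string action = (string)filterContext.RouteData.Values["action"];
                string user = filterContext.HttpContext.User.Identity.Name;

                if (!SecurityService.IsAllowed(user, controller, action))
                {
                    filterContext.Result = new HttpUnauthorizedResult();
                }
            }
        }

    Actions would then be decorated with [DbAuthorize] instead of [Authorize(Roles = "...")], so access changes become table updates rather than recompiles, and the same lookup can drive the enable/disable state of the search-view buttons.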


  • IoC and dataContext disposing in asp.net mvc 2 application

    - by zerkms
    I have the Global.asax like the code below: public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { // .... } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); ControllerBuilder.Current.SetControllerFactory(typeof(IOCControllerFactory)); } } public class IOCControllerFactory : DefaultControllerFactory { private readonly IKernel kernel; public IOCControllerFactory() { kernel = new StandardKernel(new NanocrmContainer()); } protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType) { if (controllerType == null) return base.GetControllerInstance(requestContext, controllerType); var controller = kernel.TryGet(controllerType) as IController; if (controller == null) return base.GetControllerInstance(requestContext, controllerType); var standartController = controller as Controller; if (standartController is IIoCController) ((IIoCController)standartController).SetIoc(kernel); return standartController; } class NanocrmContainer : Ninject.Modules.NinjectModule { public override void Load() { // ... Bind<DomainModel.Entities.db>().ToSelf().InRequestScope().WithConstructorArgument("connection", "Data Source=lims;Initial Catalog=nanocrm;Persist Security Info=True;User ID=***;Password=***"); } } } In this case if somewhere it is the class, defined like: public class UserRepository : IUserRepository { private db dataContext; private IUserGroupRepository userGroupRepository; public UserRepository(db dataContext, IUserGroupRepository userGroupRepository) { this.dataContext = dataContext; this.userGroupRepository = userGroupRepository; } } then the dataContext instance is created (if no one was created in this request scope) by Ninject. So the trouble now is - where to invoke dataContext method .Dispose()?
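
    A common place to answer "who disposes the per-request data context" is the end of the request itself. A minimal sketch, under the assumption that the context instance is reachable from HttpContext.Items when it is created (Ninject can also release request-scoped bindings itself, depending on the version):

        // Added to the existing MvcApplication class in Global.asax.cs:
        protected void Application_EndRequest(object sender, EventArgs e)
        {
            // "db" is an assumed key; dispose whatever per-request context was stashed there.
            var dataContext = System.Web.HttpContext.Current.Items["db"] as IDisposable;
            if (dataContext != null)
            {
                dataContext.Dispose();
            }
        }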


  • JSP or .ascx equivalent for Scala?

    - by Daniel Worthington
    I'm working on a small MVC "framework" (it's really very small) in Scala. I'd like to be able to write my view files as Scala code so I can get lots of help from the compiler. Pre-compiling is great, but what I really want is a way to have the servlet container automatically compile certain files (my view files) on request so I don't have to shut down Jetty and compile all my source files at once, then start it up again just to see small changes to my HTML. I do this a lot with .ascx files in .NET (the file will contain just one scriptlet tag with a bunch of C# code inside which writes out markup using an XmlWriter) and I love this workflow. You just make changes and then refresh your browser, but it's still getting compiled! I don't have a lot of experience with Java, but it seems possible to do this with JSP as well. I'm wondering if this sort of thing is possible in Scala. I have looked into building this myself (see more info here: http://www.nabble.com/Compiler-API-td12050645.html) but I would rather use something else if it's out there.


  • WebFaultException http status code is not passed

    - by Mike Bantegui
    I have the following service method: <OperationContract()> <WebGet([ResponseFormat]:=WebMessageFormat.Json)> Function ShouldThrowException() As Boolean It's implementation does only one thing, which is to throw a WebFaultException. Public Function ShouldThrowException() As Boolean Implements IRestService.ShouldThrowException Throw New WebFaultException(Of String)("This should fail", HttpStatusCode.BadRequest) End Function My web.config reads as: <endpointBehaviors> <behavior name="WebEndpointBehavior"> <webHttp /> </behavior> </endpointBehaviors> .. snip .. <webHttpBinding> <binding name="WebBinding" crossDomainScriptAccessEnabled="true"> <security mode="None" /> </binding> </webHttpBinding> .. snip .. <service name="RestService"> <endpoint address="" binding="webHttpBinding" behaviorConfiguration="WebEndpointBehavior" bindingConfiguration="WebBinding" contract="IRestService" name="RestService"> </endpoint> </service> When I call ShouldThrowException via my browser, I only get the following: "This should fail" I was expecting to get a 400 Bad Request error on the page. If I inspect the page using FireBug, I see that the HTTP status code that was returned is a 200 OK. According to this blog post, I should be seeing this exception. Except I'm not. What am I doing wrong here?
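
    One thing worth verifying is that the request is really being served through the endpoint that carries the webHttp behavior; if it is, the status code can also be set explicitly on the outgoing response. A C# sketch of the same operation (the original is VB) using WebOperationContext instead of WebFaultException:

        using System.Net;
        using System.ServiceModel.Web;

        public class RestService
        {
            public bool ShouldThrowException()
            {
                // WebOperationContext.Current is only populated when the webHttp behavior applies.
                OutgoingWebResponseContext response = WebOperationContext.Current.OutgoingResponse;
                response.StatusCode = HttpStatusCode.BadRequest;
                response.StatusDescription = "This should fail";
                return false;
            }
        }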


  • Detection of page refresh / F5 key in ASP.NET MVC 2

    - by Michael
    How would one go about detecting a page refresh / F5 key push on the controller handling the postback? I need to distinguish between the user pressing one of two buttons (e.g., Next, Previous) and when the F5 / page refresh occurs. My scenario is a single wizard page that has different content shown between each invocation of the user pressing the "Next" or "Previous" buttons. The error that I am running into is when the user refreshes the page / presses the F5 key, the browser re-sends the request back to the controller, which is handled as a post-back and the FormCollection type is used to look for the "submitButton" key and obtain its value (e.g., "Next," "Send"). This part was modeled after the post by Dylan Beattie at http://stackoverflow.com/questions/442704/how-do-you-handle-multiple-submit-buttons-in-asp-net-mvc-framework. Maybe I'm trying to bend MVC 2 to where it isn't meant to go but I'd like to stay with the current design in that the underlying database drives the content and order of what is shown. This allows us to add new content into the database without modifying the code the displays the content. Thanks, Michael
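
    The usual way to make F5 harmless is the Post/Redirect/Get pattern: the POST handles the button and then redirects, so a refresh only repeats a GET instead of re-sending the form. A minimal sketch (controller, action and helper names are illustrative, not taken from the question):

        using System.Web.Mvc;

        public class WizardController : Controller
        {
            [HttpGet]
            public ActionResult Step(int page)
            {
                return View(LoadStep(page));
            }

            [HttpPost]
            public ActionResult Step(int page, FormCollection form)
            {
                int nextPage = form["submitButton"] == "Next" ? page + 1 : page - 1;
                SaveStep(page, form);
                // Redirect so that refreshing the result re-issues a GET, not the post.
                return RedirectToAction("Step", new { page = nextPage });
            }

            private object LoadStep(int page) { return null; }          // placeholder for the DB-driven content
            private void SaveStep(int page, FormCollection form) { }    // placeholder for persistence
        }

    This keeps the database-driven content design intact while making a refresh indistinguishable from simply viewing the current step again.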


  • jeditable not working

    - by zurna
    I did not make any changes on the code. I dont receive any errors but its not working. I must be missing something very very simple here. Any suggestion appreciated... Test link. http://www.aslanyurek.com/inner.asp?Section=commentary&CommentaryID=1 $('.GameStory').editable('content/commentary/index.cs.asp?Process=EditLiveCommentary&CommentaryID=<%=Request.QueryString("CommentaryID")%>', { type : 'textarea', id : 'elementid', name : 'CommentaryDesc', cancel : 'Cancel', submit : 'OK', indicator : '<img src="img/indicator.gif">', tooltip : 'Click to edit...', cssclass : 'someclass' }); <div class="GameStory"> <p><span class="minute">36'</span>Here comes the pressure as Chelsea suddenly begin to up the tempo and Bolton can't keep the ball or clear it out of their own half.</p> <p><span class="minute">34'</span>Yuri Zhirkov, despite his head wound, is having a really good game and seems to be enjoying the freedom that he has been given to attack from his left back berth tonight.</p> <p><span class="minute">27'</span>Drogba whips in a free-kick that Jaaskelainen, for some reason, decides to punch straight into the face of Salomon Kalou who watches on as the ball rolls agonisingly of the post. Scare for Bolton.</p> <p><span class="minute">4'</span>Chelsea face Bolton at 8pm on Tuesday, April 13th.</p> <p><span class="minute">3'</span>Lively start at the Bridge with both sides looking to attack - it is by no means a defensive lineup from Owen Coyle so this could be an interesting game.</p> <p><span class="minute">0'</span>Nicolas Anelka gets the first shot of the match in on goal but its very tamely struck and Jaaskelainen gathers easily.</p> </div>


  • WCF 3.5 - Remove SVC Extension - Special Case

    - by Brandon
    I have several RESTful endpoints like such: System.Security.Role.svc System.Security.User.svc etc. This is meant to be a namespace so our RESTful URL's would look like: /rest/{class namespace}/{actions} I have tried a few examples to get the SVC extension removed when my endpoint has multiple periods in it, however, nothing seems to work. I have tested with the WCF REST Contrib package (http://wcfrestcontrib.codeplex.com/), this example (http://www.west-wind.com/weblog/posts/570695.aspx), and another StackOverflow post (http://stackoverflow.com/questions/355165/how-to-remove-thie-svc-extension-in-restful-wcf-service). This works great when my endpoint is something like this: Echo.svc It will properly remove the SVC extension. Any ideas on how to handle endpoints with multiple periods in the endpoint name? EDIT: After some further testing, I found out that it is failing because whenever you do: string path = HttpContext.Current.Request.AppRelativeCurrentExecutionFilePath; If the endpoint contains multiple periods, it strips off everything after the endpoint causing all of the standard IHttpModule's to fail. Example: If I call http://localhost/services/Echo/test, my relative app file path has a returned value of: ~/echo/test However, if I make a call as http://localhost/services/System.Security.User/test, then my relative app file path has a returned value of: ~/system.security.user I am missing the '/test' on the end in that situation.
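
    Building on the finding above, one workaround sketch is an IHttpModule that computes the rewrite from Request.Path (which keeps the trailing part) instead of AppRelativeCurrentExecutionFilePath; the "/services/" prefix is an assumption for illustration, and the module still has to be registered under system.webServer/modules (integrated mode) or httpModules (classic mode):

        using System;
        using System.Web;

        // Rewrites /services/System.Security.User/test to /services/System.Security.User.svc/test.
        public class SvcExtensionModule : IHttpModule
        {
            public void Init(HttpApplication application)
            {
                application.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                HttpContext context = ((HttpApplication)sender).Context;
                string path = context.Request.Path;

                int servicesIndex = path.IndexOf("/services/", StringComparison.OrdinalIgnoreCase);
                if (servicesIndex < 0 || path.IndexOf(".svc", StringComparison.OrdinalIgnoreCase) >= 0)
                    return;

                int serviceStart = servicesIndex + "/services/".Length;
                int serviceEnd = path.IndexOf('/', serviceStart);
                string rewritten = serviceEnd < 0
                    ? path + ".svc"
                    : path.Substring(0, serviceEnd) + ".svc" + path.Substring(serviceEnd);

                context.RewritePath(rewritten, false);
            }

            public void Dispose() { }
        }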


  • wsdl return an array of complex types

    - by Anand
    Hi, I have defined a web service that will return data from my MySQL database. I have written the web service in PHP. Now I have defined a complex type as follows: $server->wsdl->addComplexType( 'Category', 'complexType', 'struct', 'all', '', array( 'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'), 'category_child_id' => array('name' => 'category_child_id', 'type' => 'xsd:int'), 'category_list' => array('name' => 'category_list', 'type' => 'xsd:int') ) ); The above complex type is a row in a table in my database. Now my function must send an array of these rows, so how do I achieve that? My code is as follows: require_once('./nusoap/nusoap.php'); $server = new soap_server; $server->configureWSDL('productwsdl', 'urn:productwsdl'); // Register the data structures used by the service $server->wsdl->addComplexType( 'Category', 'complexType', 'struct', 'all', '', array( 'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'), 'category_child_id' => array('name' => 'category_child_id', 'type' => 'xsd:int'), 'category_list' => array('name' => 'category_list', 'type' => 'xsd:int') ) ); $server->register('getaproduct', // method name array(), // input parameters //array('return' => array('result' => 'tns:Category')), // output parameters array('return' => 'tns:Category'), // output parameters 'urn:productwsdl', // namespace 'urn:productwsdl#getaproduct', // soapaction 'rpc', // style 'encoded', // use 'Get the product categories' // documentation ); function getaproduct() { $conn = mysql_connect('localhost','root',''); mysql_select_db('sssl', $conn); $sql = "SELECT * FROM jos_vm_category_xref"; $q = mysql_query($sql); while($r = mysql_fetch_array($q)) { $items[] = array('category_parent_id' => $r['category_parent_id'], 'category_child_id' => $r['category_child_id'], 'category_list' => $r['category_list']); } return $items; } // Use the request to (try to) invoke the service $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : ''; $server->service($HTTP_RAW_POST_DATA);


  • Treebeard admin in Django

    - by Sharath
    I've setup Treebeard in Django and everything seems to have gone well. I tried to setup the admin system and I can see my models being presented in the admin interface. However, when I try to add new data using the admin interface, I get the following error in my template. The code still works fine, and I did a check in my DB and the data seems to be inserted properly. However, the view doesn't seem to load properly. Any idea about what is causing this?? The exception am getting is.. Caught an exception while rendering: Failed lookup for key [request] in u'[{\'action_index\': 0, \'block\': , , , , , , ]}, {\'block\': , , , ], , , , , \n \', ], , ], , , , ], , , \n \', , , , , , , , , ], , ], \n \']}, {\'cl\': , \'root_path\': None, \'actions_on_bottom\': False, \'title\': u\'Select album to change\', \'has_add_permission\': True, \'media\': , \'is_popup\': False, \'action_form\': , \'actions_on_top\': True, \'app_label\': \'gallery\'}, {\'MEDIA_URL\': \'\'}, {\'LANGUAGES\': ((\'ar\', \'Arabic\'), (\'bn\', \'Bengali\'), (\'bg\', \'Bulgarian\'), (\'ca\', \'Catalan\'), (\'cs\', \'Czech\'), (\'cy\', \'Welsh\'), (\'da\', \'Danish\'), (\'de\', \'German\'), (\'el\', \'Greek\'), (\'en\', \'English\'), (\'es\', \'Spanish\'), (\'et\', \'Estonian\'), (\'es-ar\', \'Argentinean Spanish\'), (\'eu\', \'Basque\'), (\'fa\', \'Persian\'), (\'fi\', \'Finnish\'), (\'fr\', \'French\'), (\'ga\', \'Irish\'), (\'gl\', \'Galician\'), (\'hu\', \'Hungarian\'), (\'he\', \'Hebrew\'), (\'hi\', \'Hindi\'), (\'hr\', \'Croatian\'), (\'is\', \'Icelandic\'), (\'it\', \'Italian\'), (\'ja\', \'Japanese\'), (\'ka\', \'Georgian\'), (\'ko\', \'Korean\'), (\'km\', \'Khmer\'), (\'kn\', \'Kannada\'), (\'lv\', \'Latvian\'), (\'lt\', \'Lithuanian\'), (\'mk\', \'Macedonian\'), (\'nl\', \'Dutch\'), (\'no\', \'Norwegian\'), (\'pl\', \'Polish\'), (\'pt\', \'Portuguese\'), (\'pt-br\', \'Brazilian Portuguese\'), (\'ro\', \'Romanian\'), (\'ru\', \'Russian\'), (\'sk\', \'Slovak\'), (\'sl\', \'Slovenian\'), (\'sr\', \'Serbian\'), (\'sv\', \'Swedish\'), (\'ta\', \'Tamil\'), (\'te\', \'Telugu\'), (\'th\', \'Thai\'), (\'tr\', \'Turkish\'), (\'uk\', \'Ukrainian\'), (\'zh-cn\', \'Simplified Chinese\'), (\'zh-tw\', \'Traditional Chinese\')), \'LANGUAGE_BIDI\': False, \'LANGUAGE_CODE\': \'en-us\'}, {}, {\'perms\': , \'messages\': [], \'user\': }, {}]' This happens after I hit the save button in Django admin. This is my admin.py implementation.. class MP_Album_Admin(TreeAdmin): pass admin.site.register(Album,MP_Album_Admin)


  • JSF Render response programmatically

    - by Shamik
    I have one parent page with a parentManagedBean (attached to Session Scope). On click of a button on this parent page, one popup comes which has a childManagedBean (attached to Request scope). Now ChildManagedBean holds a reference to parentManaged bean through JSF's managed property facility. On this popup window, user selects some option which populates a large value object. I use the managed property of childManagedBean to set the values from this large object to that of parentManagedBean. Problem is - The parent page shows a link, on click of which a popup comes, on selection of the popup, the popup disappears and set the values to the parentManaged bean. So far so good, but the newly set values need to appear on the parent page. This is where I am stuck. How to programmatically render the master page/render page when I am at the child managed bean? Is there a way I can get handle of the parent page and refresh it? Note: I'm using JSF 1.1 EDIT- After following the solution of "resubmit-ing the form" from javascript, I am seeing that the old form is getting resubmitting which overwrites all of my changed values.


  • Cocoa Read NSInputStream from FTP connection

    - by Chuck
    Hi, I (apparently) manage to make a ftp connection, but fail to read anything from it, and with good cause: I don't reach the reading until the connection has timed out. Here's my code: NSHost *host = [NSHost hostWithAddress:@"127.0.0.1"]; [NSStream getStreamsToHost:host port:3333 inputStream:&iStream outputStream:&oStream]; NSMutableDictionary *settings = [NSMutableDictionary dictionaryWithCapacity:1]; [settings setObject:(NSString *)NSStreamSocketSecurityLevelTLSv1 forKey:(NSString *)kCFStreamSSLLevel]; [settings setObject:[NSNumber numberWithBool:YES] forKey:(NSString *)kCFStreamSSLAllowsAnyRoot]; [iStream retain]; [iStream setDelegate:self]; [iStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; CFReadStreamSetProperty((CFReadStreamRef)iStream, kCFStreamPropertySSLSettings, (CFTypeRef)settings);forKey:NSStreamSocketSecurityLevelKey]; [iStream open]; [oStream retain]; [oStream setDelegate:self]; [oStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; CFWriteStreamSetProperty((CFWriteStreamRef)oStream, kCFStreamPropertySSLSettings, (CFTypeRef)settings); forKey:NSStreamSocketSecurityLevelKey]; [oStream open]; NSMutableData *returnMessage = [NSMutableData dataWithLength: 300]; [iStream read: [returnMessage mutableBytes] maxLength: 300]; NSString *readData = [[NSString alloc] initWithBytes: [returnMessage bytes] length: 300 encoding: NSUTF8StringEncoding]; NSRunAlertPanel(@"response", readData, nil, nil, nil); I have not sent a request to the FTP to switch to ssl yet. Any help is greatly appreciated as I find Xcode quite horrible for debugging (no exception or error msg on failed steps what so ever). Chuck


  • WCF 3.5 Service and multiple http bindings

    - by mortenvpdk
    Hi, I can't get my WCF service to work with more than one HTTP binding. In IIS 7 I have two bindings, http://service and http://service.test, both on port 80. In my web.config I have added the baseAddressPrefixFilters, but I can't add more than one: <serviceHostingEnvironment> <baseAddressPrefixFilters> <add prefix="http://service"/> <add prefix="http://service.test"/> </baseAddressPrefixFilters> </serviceHostingEnvironment> This gives almost the same error "This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. " as if no filters were specified at all (This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item) If I add only one filter then the service works but only responds on the added filter address. I've also tried specifying multiple endpoints like this (and only one filter): <endpoint address="http://service.test" binding="basicHttpBinding" bindingConfiguration="" contract="IService" /> <endpoint address="http://service" binding="basicHttpBinding" bindingConfiguration="" contract="IService" /> Then still only the address also specified in the filter works and the other returns this error: Server Error in Application "ISPSERVICE" HTTP Error 400.0 - Bad Request Regards Morten
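
    A WCF 3.5 ServiceHost accepts only one base address per scheme, so with two IIS host headers something has to pick one - which is exactly what a single baseAddressPrefixFilters entry does. A programmatic equivalent (a sketch, not the poster's setup) is a custom ServiceHostFactory that filters the base addresses before the host is built; if moving to .NET 4 is an option, <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> lifts the restriction altogether:

        using System;
        using System.Linq;
        using System.ServiceModel;
        using System.ServiceModel.Activation;

        // Keeps only the first base address per scheme so the ServiceHost can be constructed.
        public class SingleBindingServiceHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                Uri[] filtered = baseAddresses
                    .GroupBy(uri => uri.Scheme)
                    .Select(group => group.First())
                    .ToArray();

                return new ServiceHost(serviceType, filtered);
            }
        }

    The factory is wired up through the Factory attribute of the @ServiceHost directive in the .svc file.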


  • Exception during secure communication implementation

    - by Liran
    Hi everyone. I'm trying to implement simple secured client-server communication using WCF. When I launch my server everything is OK, but when I launch my client I get this error: Error : An error occurred while making the HTTP request to https://localhost:8000/ExchangeService. This could be due to the fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding between the client and the server. This is the server code: Uri address = new Uri("https://localhost:8000/ExchangeService"); WSHttpBinding binding = new WSHttpBinding(); //Set Binding Params binding.Security.Mode = SecurityMode.Transport; binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None; binding.Security.Transport.ProxyCredentialType = HttpProxyCredentialType.None; Type contract = typeof(ExchangeService.ServiceContract.ITradeService); ServiceHost host = new ServiceHost(typeof(TradeService)); host.AddServiceEndpoint(contract, binding, address); host.Open(); This is the client configuration (app.config): </client> <bindings> <wsHttpBinding> <binding name="TradeWsHttpBinding"> <security mode="Transport"> <transport clientCredentialType="None" proxyCredentialType ="None"/> </security> </binding> </wsHttpBinding> </bindings> The security configuration on both the client and the server is the same, and I don't need a certificate for the server with that kind of security (transport), so why do I get this exception? Thanks...
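
    For what it's worth, transport security over an https address does require an SSL certificate bound to the port in HTTP.SYS, even though no certificate element appears in the WCF configuration itself. On Windows Vista/Server 2008 and later the binding is created along these lines (the thumbprint and appid below are placeholders):

        netsh http add sslcert ipport=0.0.0.0:8000 certhash=<certificate thumbprint> appid={00000000-0000-0000-0000-000000000000}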


  • Using PHP cURL with an HTTP Debugging Proxy

    - by Kane
    I'm using the app "Fiddler" to debug a GET attempt to a website via PHP cURL. In order to see the cURL traffic I had to specify that the cURL connection use the Fiddler proxy (see code below). $ch = curl_init(); curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, 1); curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:8888'); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5); curl_setopt($ch, CURLOPT_TIMEOUT, 10); curl_setopt($ch, CURLOPT_HEADERFUNCTION, 'read_header'); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_USERAGENT, $user_agent); curl_setopt($ch, CURLOPT_REFERER, "http://domain.com"); curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); curl_setopt($ch, CURLOPT_COOKIEJAR, "my_cookies.txt"); curl_setopt($ch, CURLOPT_COOKIEFILE, "my_cookies.txt"); curl_setopt($ch, CURLOPT_URL, "http://domain.com"); $response = curl_exec($ch); But the problem is that in Fiddler I can only see this: Request (domain.com is just an alias): CONNECT domain.com:80 HTTP/1.1 Response: HTTP/1.1 200 Blind-Connection Established If I manually load the website in a browser Fiddler gives me WAY more information. I can see the cookies, the header information, and what I'm receiving via the GET. Any ideas why Fiddler can't see more useful information from PHP cURL? Edit: I tried turning on the "Enable HTTPS Decryption" option inside Tools / Fiddler Options / HTTPS (which I'm not sure why I'd need to use as I didn't tell cURL to use HTTPS). Unfortunately, by changing this setting I now get a Response of: HTTP/1.1 502 Connection failed Edit: If it helps, the app "Charles" shows me WAY more information than Fiddler, but I really want to figure out Fiddler since I like it better.


  • Calling web service using javascript error

    - by politrok
    Hi, I am trying to call a web service using JSON. When I call the web service without the expected parameters it works, but when I try to send a parameter I get an error. This is my code: function GetSynchronousJSONResponse(url, postData) { var xmlhttp = null; if (window.XMLHttpRequest) xmlhttp = new XMLHttpRequest(); else if (window.ActiveXObject) { if (new ActiveXObject("Microsoft.XMLHTTP")) xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); else xmlhttp = new ActiveXObject("Msxml2.XMLHTTP"); } url = url + "?rnd=" + Math.random(); // to ensure a non-cached version xmlhttp.open("POST", url, false); xmlhttp.setRequestHeader("Content-Type", "application/json; charset=utf-8"); xmlhttp.send(postData); var responseText = xmlhttp.responseText; return responseText; } function Test() { var result = GetSynchronousJSONResponse('http://localhost:1517/Mysite/Myservice.asmx/myProc', '{"MyParam":"' + 'test' + '"}'); result = eval('(' + result + ')'); alert(result.d); } This is the error: System.InvalidOperationException: Request format is invalid: application/json; charset=utf-8. at System.Web.Services.Protocols.HttpServerProtocol.ReadParameters() at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest What is wrong? Thanks in advance.
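
    "Request format is invalid: application/json" from an .asmx service is commonly caused by the service not being exposed as a script service, so ASP.NET falls back to the form-post protocol and rejects the JSON body. A minimal sketch of how the service side usually looks (the class and method names are taken from the URL in the question; everything else is illustrative):

        using System.Web.Script.Services;
        using System.Web.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        [ScriptService]
        public class Myservice : WebService
        {
            // The parameter name must match the JSON property sent from the client ("MyParam").
            [WebMethod]
            [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
            public string myProc(string MyParam)
            {
                return "Received: " + MyParam;
            }
        }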


  • IPsec tunnel to Android device not created even though there is an IKE SA

    - by Quentin Swain
    I'm trying to configure a VPN tunnel between an Android device running 4.1 and a Fedora 17 Linux box running strongSwan 5.0. The device reports that it is connected and strongSwan statusall returns that there is an IKE SA, but doesn't display a tunnel. I used the instructions for iOS in the wiki to generate certificates and configure strongSwan. Since Android uses a modified version of racoon this should work and since the connection is partly established I think I am on the right track. I don't see any errors about not being able to create the tunnel. This is the configuration for the strongSwan connection conn android2 keyexchange=ikev1 authby=xauthrsasig xauth=server left=96.244.142.28 leftsubnet=0.0.0.0/0 leftfirewall=yes leftcert=serverCert.pem right=%any rightsubnet=10.0.0.0/24 rightsourceip=10.0.0.2 rightcert=clientCert.pem ike=aes256-sha1-modp1024 auto=add This is the output of strongswan statusall Status of IKE charon daemon (strongSwan 5.0.0, Linux 3.3.4-5.fc17.x86_64, x86_64): uptime: 20 minutes, since Oct 31 10:27:31 2012 malloc: sbrk 270336, mmap 0, used 198144, free 72192 worker threads: 8 of 16 idle, 7/1/0/0 working, job queue: 0/0/0/0, scheduled: 7 loaded plugins: charon aes des sha1 sha2 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs8 pgp dnskey pem openssl fips-prf gmp xcbc cmac hmac attr kernel-netlink resolve socket-default stroke updown xauth-generic Virtual IP pools (size/online/offline): android-hybrid: 1/0/0 android2: 1/1/0 Listening IP addresses: 96.244.142.28 Connections: android-hybrid: %any...%any IKEv1 android-hybrid: local: [C=CH, O=strongSwan, CN=vpn.strongswan.org] uses public key authentication android-hybrid: cert: "C=CH, O=strongSwan, CN=vpn.strongswan.org" android-hybrid: remote: [%any] uses XAuth authentication: any android-hybrid: child: dynamic === dynamic TUNNEL android2: 96.244.142.28...%any IKEv1 android2: local: [C=CH, O=strongSwan, CN=vpn.strongswan.org] uses public key authentication android2: cert: "C=CH, O=strongSwan, CN=vpn.strongswan.org" android2: remote: [C=CH, O=strongSwan, CN=client] uses public key authentication android2: cert: "C=CH, O=strongSwan, CN=client" android2: remote: [%any] uses XAuth authentication: any android2: child: 0.0.0.0/0 === 10.0.0.0/24 TUNNEL Security Associations (1 up, 0 connecting): android2[3]: ESTABLISHED 10 seconds ago, 96.244.142.28[C=CH, O=strongSwan, CN=vpn.strongswan.org]...208.54.35.241[C=CH, O=strongSwan, CN=client] android2[3]: Remote XAuth identity: android android2[3]: IKEv1 SPIs: 4151e371ad46b20d_i 59a56390d74792d2_r*, public key reauthentication in 56 minutes android2[3]: IKE proposal: AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024 The output of ip -s xfrm policy src ::/0 dst ::/0 uid 0 socket in action allow index 3851 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use - src ::/0 dst ::/0 uid 0 socket out action allow index 3844 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use - src ::/0 dst ::/0 uid 0 socket in action allow index 3835 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 
0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use - src ::/0 dst ::/0 uid 0 socket out action allow index 3828 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use - src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 socket in action allow index 3819 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use 2012-10-31 13:29:39 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 socket out action allow index 3812 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use 2012-10-31 13:29:22 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 socket in action allow index 3803 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use 2012-10-31 13:29:20 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 socket out action allow index 3796 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use 2012-10-31 13:29:20 So a xfrm policy isn't being created for the connection, even though there is an SA between device and strongswan. 
Executing ip -s xfrm policy on the android device results in the following output: src 0.0.0.0/0 dst 10.0.0.2/32 uid 0 dir in action allow index 40 priority 2147483648 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:08 use - tmpl src 96.244.142.28 dst 25.239.33.30 proto esp spi 0x00000000(0) reqid 0(0x00000000) mode tunnel level required share any enc-mask 00000000 auth-mask 00000000 comp-mask 00000000 src 10.0.0.2/32 dst 0.0.0.0/0 uid 0 dir out action allow index 33 priority 2147483648 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:08 use - tmpl src 25.239.33.30 dst 96.244.142.28 proto esp spi 0x00000000(0) reqid 0(0x00000000) mode tunnel level required share any enc-mask 00000000 auth-mask 00000000 comp-mask 00000000 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 dir 4 action allow index 28 priority 0 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:04 use 2012-10-31 13:42:08 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 dir 3 action allow index 19 priority 0 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:04 use 2012-10-31 13:42:08 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 dir 4 action allow index 12 priority 0 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:04 use 2012-10-31 13:42:06 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 dir 3 action allow index 3 priority 0 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:04 use 2012-10-31 13:42:07 Logs from charon: 00[DMN] Starting IKE charon daemon (strongSwan 5.0.0, Linux 3.3.4-5.fc17.x86_64, x86_64) 00[KNL] listening on interfaces: 00[KNL] em1 00[KNL] 96.244.142.28 00[KNL] fe80::224:e8ff:fed2:18b2 00[CFG] loading ca certificates from '/etc/strongswan/ipsec.d/cacerts' 00[CFG] loaded ca certificate "C=CH, O=strongSwan, CN=strongSwan CA" from '/etc/strongswan/ipsec.d/cacerts/caCert.pem' 00[CFG] loading aa certificates from '/etc/strongswan/ipsec.d/aacerts' 00[CFG] loading ocsp signer certificates from '/etc/strongswan/ipsec.d/ocspcerts' 00[CFG] loading attribute certificates from '/etc/strongswan/ipsec.d/acerts' 00[CFG] loading crls from '/etc/strongswan/ipsec.d/crls' 00[CFG] loading secrets from '/etc/strongswan/ipsec.secrets' 00[CFG] loaded RSA private key from '/etc/strongswan/ipsec.d/private/clientKey.pem' 00[CFG] loaded IKE secret for %any 00[CFG] loaded EAP secret for 
00[CFG] loaded EAP secret for android
00[DMN] loaded plugins: charon aes des sha1 sha2 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs8 pgp dnskey pem openssl fips-prf gmp xcbc cmac hmac attr kernel-netlink resolve socket-default stroke updown xauth-generic
08[NET] waiting for data on sockets
16[LIB] created thread 16 [15338]
16[JOB] started worker thread 16
11[CFG] received stroke: add connection 'android-hybrid'
11[CFG] conn android-hybrid
11[CFG]   left=%any
11[CFG]   leftsubnet=(null)
11[CFG]   leftsourceip=(null)
11[CFG]   leftauth=pubkey
11[CFG]   leftauth2=(null)
11[CFG]   leftid=(null)
11[CFG]   leftid2=(null)
11[CFG]   leftrsakey=(null)
11[CFG]   leftcert=serverCert.pem
11[CFG]   leftcert2=(null)
11[CFG]   leftca=(null)
11[CFG]   leftca2=(null)
11[CFG]   leftgroups=(null)
11[CFG]   leftupdown=ipsec _updown iptables
11[CFG]   right=%any
11[CFG]   rightsubnet=(null)
11[CFG]   rightsourceip=96.244.142.3
11[CFG]   rightauth=xauth
11[CFG]   rightauth2=(null)
11[CFG]   rightid=%any
11[CFG]   rightid2=(null)
11[CFG]   rightrsakey=(null)
11[CFG]   rightcert=(null)
11[CFG]   rightcert2=(null)
11[CFG]   rightca=(null)
11[CFG]   rightca2=(null)
11[CFG]   rightgroups=(null)
11[CFG]   rightupdown=(null)
11[CFG]   eap_identity=(null)
11[CFG]   aaa_identity=(null)
11[CFG]   xauth_identity=(null)
11[CFG]   ike=aes256-sha1-modp1024
11[CFG]   esp=aes128-sha1-modp2048,3des-sha1-modp1536
11[CFG]   dpddelay=30
11[CFG]   dpdtimeout=150
11[CFG]   dpdaction=0
11[CFG]   closeaction=0
11[CFG]   mediation=no
11[CFG]   mediated_by=(null)
11[CFG]   me_peerid=(null)
11[CFG]   keyexchange=ikev1
11[KNL] getting interface name for %any
11[KNL] %any is not a local address
11[KNL] getting interface name for %any
11[KNL] %any is not a local address
11[CFG] left nor right host is our side, assuming left=local
11[CFG] loaded certificate "C=CH, O=strongSwan, CN=vpn.strongswan.org" from 'serverCert.pem'
11[CFG] id '%any' not confirmed by certificate, defaulting to 'C=CH, O=strongSwan, CN=vpn.strongswan.org'
11[CFG] added configuration 'android-hybrid'
11[CFG] adding virtual IP address pool 'android-hybrid': 96.244.142.3/32
13[CFG] received stroke: add connection 'android2'
13[CFG] conn android2
13[CFG]   left=96.244.142.28
13[CFG]   leftsubnet=0.0.0.0/0
13[CFG]   leftsourceip=(null)
13[CFG]   leftauth=pubkey
13[CFG]   leftauth2=(null)
13[CFG]   leftid=(null)
13[CFG]   leftid2=(null)
13[CFG]   leftrsakey=(null)
13[CFG]   leftcert=serverCert.pem
13[CFG]   leftcert2=(null)
13[CFG]   leftca=(null)
13[CFG]   leftca2=(null)
13[CFG]   leftgroups=(null)
13[CFG]   leftupdown=ipsec _updown iptables
13[CFG]   right=%any
13[CFG]   rightsubnet=10.0.0.0/24
13[CFG]   rightsourceip=10.0.0.2
13[CFG]   rightauth=pubkey
13[CFG]   rightauth2=xauth
13[CFG]   rightid=(null)
13[CFG]   rightid2=(null)
13[CFG]   rightrsakey=(null)
13[CFG]   rightcert=clientCert.pem
13[CFG]   rightcert2=(null)
13[CFG]   rightca=(null)
13[CFG]   rightca2=(null)
13[CFG]   rightgroups=(null)
13[CFG]   rightupdown=(null)
13[CFG]   eap_identity=(null)
13[CFG]   aaa_identity=(null)
13[CFG]   xauth_identity=(null)
13[CFG]   ike=aes256-sha1-modp1024
13[CFG]   esp=aes128-sha1-modp2048,3des-sha1-modp1536
13[CFG]   dpddelay=30
13[CFG]   dpdtimeout=150
13[CFG]   dpdaction=0
13[CFG]   closeaction=0
13[CFG]   mediation=no
13[CFG]   mediated_by=(null)
13[CFG]   me_peerid=(null)
13[CFG]   keyexchange=ikev0
13[KNL] getting interface name for %any
13[KNL] %any is not a local address
13[KNL] getting interface name for 96.244.142.28
13[KNL] 96.244.142.28 is on interface em1
13[CFG] loaded certificate "C=CH, O=strongSwan, CN=vpn.strongswan.org" from 'serverCert.pem'
13[CFG] id '96.244.142.28' not confirmed by certificate, defaulting to 'C=CH, O=strongSwan, CN=vpn.strongswan.org'
13[CFG] loaded certificate "C=CH, O=strongSwan, CN=client" from 'clientCert.pem'
13[CFG] id '%any' not confirmed by certificate, defaulting to 'C=CH, O=strongSwan, CN=client'
13[CFG] added configuration 'android2'
13[CFG] adding virtual IP address pool 'android2': 10.0.0.2/32
08[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500]
15[CFG] looking for an ike config for 96.244.142.28...208.54.35.241
15[CFG]   candidate: %any...%any, prio 2
15[CFG]   candidate: 96.244.142.28...%any, prio 5
15[CFG] found matching ike config: 96.244.142.28...%any with prio 5
01[JOB] next event in 29s 999ms, waiting
15[IKE] received NAT-T (RFC 3947) vendor ID
15[IKE] received draft-ietf-ipsec-nat-t-ike-02 vendor ID
15[IKE] received draft-ietf-ipsec-nat-t-ike-02\n vendor ID
15[IKE] received draft-ietf-ipsec-nat-t-ike-00 vendor ID
15[IKE] received XAuth vendor ID
15[IKE] received Cisco Unity vendor ID
15[IKE] received DPD vendor ID
15[IKE] 208.54.35.241 is initiating a Main Mode IKE_SA
15[IKE] IKE_SA (unnamed)[1] state change: CREATED => CONNECTING
15[CFG] selecting proposal:
15[CFG]   proposal matches
15[CFG] received proposals: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_256/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_128/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:3DES_CBC/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:3DES_CBC/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:DES_CBC/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:DES_CBC/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024
15[CFG] configured proposals: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_128/AES_CBC_192/AES_CBC_256/3DES_CBC/CAMELLIA_CBC_128/CAMELLIA_CBC_192/CAMELLIA_CBC_256/HMAC_MD5_96/HMAC_SHA1_96/HMAC_SHA2_256_128/HMAC_SHA2_384_192/HMAC_SHA2_512_256/AES_XCBC_96/AES_CMAC_96/PRF_HMAC_MD5/PRF_HMAC_SHA1/PRF_HMAC_SHA2_256/PRF_HMAC_SHA2_384/PRF_HMAC_SHA2_512/PRF_AES128_XCBC/PRF_AES128_CMAC/MODP_2048/MODP_2048_224/MODP_2048_256/MODP_1536/MODP_4096/MODP_8192/MODP_1024/MODP_1024_160
15[CFG] selected proposal: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
15[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235]
04[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235]
15[MGR] checkin IKE_SA (unnamed)[1]
15[MGR] check-in of IKE_SA successful.
08[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500]
08[NET] waiting for data on sockets
07[MGR] checkout IKE_SA by message
07[MGR] IKE_SA (unnamed)[1] successfully checked out
07[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500]
07[LIB] size of DH secret exponent: 1023 bits
07[IKE] remote host is behind NAT
07[IKE] sending cert request for "C=CH, O=strongSwan, CN=strongSwan CA"
07[ENC] generating NAT_D_V1 payload finished
07[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235]
07[MGR] checkin IKE_SA (unnamed)[1]
07[MGR] check-in of IKE_SA successful.
04[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235]
08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
10[IKE] ignoring certificate request without data
10[IKE] received end entity cert "C=CH, O=strongSwan, CN=client"
10[CFG] looking for XAuthInitRSA peer configs matching 96.244.142.28...208.54.35.241[C=CH, O=strongSwan, CN=client]
10[CFG]   candidate "android-hybrid", match: 1/1/2/2 (me/other/ike/version)
10[CFG]   candidate "android2", match: 1/20/5/1 (me/other/ike/version)
10[CFG] selected peer config "android2"
10[CFG]   certificate "C=CH, O=strongSwan, CN=client" key: 2048 bit RSA
10[CFG]   using trusted ca certificate "C=CH, O=strongSwan, CN=strongSwan CA"
10[CFG] checking certificate status of "C=CH, O=strongSwan, CN=client"
10[CFG] ocsp check skipped, no ocsp found
10[CFG] certificate status is not available
10[CFG]   certificate "C=CH, O=strongSwan, CN=strongSwan CA" key: 2048 bit RSA
10[CFG]   reached self-signed root ca with a path length of 0
10[CFG]   using trusted certificate "C=CH, O=strongSwan, CN=client"
10[IKE] authentication of 'C=CH, O=strongSwan, CN=client' with RSA successful
10[ENC] added payload of type ID_V1 to message
10[ENC] added payload of type SIGNATURE_V1 to message
10[IKE] authentication of 'C=CH, O=strongSwan, CN=vpn.strongswan.org' (myself) successful
10[IKE] queueing XAUTH task
10[IKE] sending end entity cert "C=CH, O=strongSwan, CN=vpn.strongswan.org"
10[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595]
04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595]
10[IKE] activating new tasks
10[IKE]   activating XAUTH task
10[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595]
04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595]
01[JOB] next event in 3s 999ms, waiting
10[MGR] checkin IKE_SA android2[1]
10[MGR] check-in of IKE_SA successful.
08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
08[NET] waiting for data on sockets
12[MGR] checkout IKE_SA by message
12[MGR] IKE_SA android2[1] successfully checked out
12[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
12[MGR] checkin IKE_SA android2[1]
12[MGR] check-in of IKE_SA successful.
08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
16[MGR] checkout IKE_SA by message
16[MGR] IKE_SA android2[1] successfully checked out
16[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
08[NET] waiting for data on sockets
16[IKE] XAuth authentication of 'android' successful
16[IKE] reinitiating already active tasks
16[IKE]   XAUTH task
16[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595]
04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595]
16[MGR] checkin IKE_SA android2[1]
01[JOB] next event in 3s 907ms, waiting
16[MGR] check-in of IKE_SA successful.
08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
09[MGR] checkout IKE_SA by message
09[MGR] IKE_SA android2[1] successfully checked out
09[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
09[IKE] IKE_SA android2[1] established between 96.244.142.28[C=CH, O=strongSwan, CN=vpn.strongswan.org]...208.54.35.241[C=CH, O=strongSwan, CN=client]
09[IKE] IKE_SA android2[1] state change: CONNECTING => ESTABLISHED
09[IKE] scheduling reauthentication in 3409s
09[IKE] maximum IKE_SA lifetime 3589s
09[IKE] activating new tasks
09[IKE] nothing to initiate
09[MGR] checkin IKE_SA android2[1]
09[MGR] check-in of IKE_SA successful.
09[MGR] checkout IKE_SA
09[MGR] IKE_SA android2[1] successfully checked out
09[MGR] checkin IKE_SA android2[1]
09[MGR] check-in of IKE_SA successful.
01[JOB] next event in 3s 854ms, waiting
08[NET] waiting for data on sockets
08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
14[MGR] checkout IKE_SA by message
14[MGR] IKE_SA android2[1] successfully checked out
14[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500]
14[IKE] processing INTERNAL_IP4_ADDRESS attribute
14[IKE] processing INTERNAL_IP4_NETMASK attribute
14[IKE] processing INTERNAL_IP4_DNS attribute
14[IKE] processing INTERNAL_IP4_NBNS attribute
14[IKE] processing UNITY_BANNER attribute
14[IKE] processing UNITY_DEF_DOMAIN attribute
14[IKE] processing UNITY_SPLITDNS_NAME attribute
14[IKE] processing UNITY_SPLIT_INCLUDE attribute
14[IKE] processing UNITY_LOCAL_LAN attribute
14[IKE] processing APPLICATION_VERSION attribute
14[IKE] peer requested virtual IP %any
14[CFG] assigning new lease to 'android'
14[IKE] assigning virtual IP 10.0.0.2 to peer 'android'
14[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595]
14[MGR] checkin IKE_SA android2[1]
14[MGR] check-in of IKE_SA successful.
04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595]
08[NET] waiting for data on sockets
01[JOB] got event, queuing job for execution
01[JOB] next event in 91ms, waiting
13[MGR] checkout IKE_SA
13[MGR] IKE_SA android2[1] successfully checked out
13[MGR] checkin IKE_SA android2[1]
13[MGR] check-in of IKE_SA successful.
01[JOB] got event, queuing job for execution
01[JOB] next event in 24s 136ms, waiting
15[MGR] checkout IKE_SA
15[MGR] IKE_SA android2[1] successfully checked out
15[MGR] checkin IKE_SA android2[1]
15[MGR] check-in of IKE_SA successful.
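For completeness, the conn that produces the android2 stroke message above should correspond to something like this in ipsec.conf (reconstructed from the log, so the exact option spelling and the leftfirewall/auto values are assumptions rather than the real file):

    conn android2
        keyexchange=ike      # the stroke dump shows ikev0, i.e. either IKE version
        ike=aes256-sha1-modp1024
        esp=aes128-sha1-modp2048,3des-sha1-modp1536
        left=96.244.142.28
        leftsubnet=0.0.0.0/0
        leftcert=serverCert.pem
        leftauth=pubkey
        leftfirewall=yes     # matches leftupdown=ipsec _updown iptables
        right=%any
        rightsubnet=10.0.0.0/24
        rightsourceip=10.0.0.2
        rightcert=clientCert.pem
        rightauth=pubkey
        rightauth2=xauth
        dpddelay=30
        dpdtimeout=150
        auto=add

The log excerpt ends after Phase 1 and mode config: the IKE_SA is established, XAuth succeeds and the virtual IP 10.0.0.2 is assigned, but no Quick Mode exchange follows, which matches the missing tunnel-mode policy on the server.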

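Once a CHILD_SA were installed I would expect the gateway's ip xfrm policy output to gain a pair of tunnel-mode entries (plus a matching fwd policy) mirroring the ones on the Android side, roughly of this shape -- an illustration inferred from the Android output above, not actual output from the gateway:

    src 0.0.0.0/0 dst 10.0.0.2/32
        dir out ...
        tmpl src 96.244.142.28 dst 208.54.35.241
            proto esp reqid ... mode tunnel
    src 10.0.0.2/32 dst 0.0.0.0/0
        dir in ...
        tmpl src 208.54.35.241 dst 96.244.142.28
            proto esp reqid ... mode tunnel

That pair is exactly what is missing from the first listing.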
