Search Results

Search found 25391 results on 1016 pages for 'update notification'.

  • munin-limits error, Missing configuration options for contact admin; skipping

    - by jjames
    I am getting an error when trying to configure email alerts with Munin. I've tried with both version 1.4.7 and 2.0.6. My config file:

        contacts admin
        contact.admin.command mail -s "Munin notification ${var:host}" [email protected]
        contact.admin.always_send warning critical

        [Production;server1]
        use_node_name yes
        address server1addr
        contacts enielson

    The error:

        2012/08/31 16:47:24 [WARNING] Missing configuration options for contact admin; skipping

    How can I fix this?

  • What does this code from AuthKit do? (where are these functions and methods defined?)

    - by Beau Simensen
    I am trying to implement my own authentication method for AuthKit and am trying to figure out how some of the built-in methods work. In particular, I'm trying to figure out how to update the REMOTE_USER for environ correctly. This is how it is handled inside of authkit.authenticate.basic, but it is pretty confusing. I cannot find any place where REMOTE_USER and AUTH_TYPE are defined. Is there something strange going on here and, if so, what is it?

        def __call__(self, environ, start_response):
            environ['authkit.users'] = self.users
            result = self.authenticate(environ)
            if isinstance(result, str):
                AUTH_TYPE.update(environ, 'basic')
                REMOTE_USER.update(environ, result)
            return self.application(environ, start_response)

    There are actually a number of all-uppercase things like this that I cannot find a definition for. For example, where does AUTHORIZATION come from below?

        def authenticate(self, environ):
            authorization = AUTHORIZATION(environ)
            if not authorization:
                return self.build_authentication()
            (authmeth, auth) = authorization.split(' ', 1)
            if 'basic' != authmeth.lower():
                return self.build_authentication()
            auth = auth.strip().decode('base64')
            username, password = auth.split(':', 1)
            if self.authfunc(environ, username, password):
                return username
            return self.build_authentication()

    I feel like maybe I am missing some special syntax handling for the environ dict, but it is possible that there is something else really weird going on here that isn't immediately obvious to someone as new to Python as myself.

  • Vlad the Deployer: why do I need an scm folder?

    - by egarcia
    I'm learning to use Vlad the Deployer and I've got a question. Since I'm still learning, I don't know what is pertinent to the question and what isn't, so please bear with me if I'm a little verbose.

    I've got 2 environments for a new application (test and production) besides my development machine. I've figured out this way to do the initial setup in my vlad.rake:

        namespace :test do
          task :set do
            set :domain, 'test.myserver.com'
          end
        end

        namespace :production do
          task :set do
            set :domain, 'www.myserver.com'
          end
        end

    This way I can have environment-specific stuff inside the namespaces and still have shared tasks. For example, this would be the initial setup for test:

        rake vlad:test:set vlad:setup vlad:update

    This creates the following folders on my test server:

        releases/
        scm/
        shared/
        current -> symlink to the last release (inside the releases folder)

    My question is: what's the point of the scm folder? Every time I do vlad:update, the following happens:

    1. svn checkout into the scm/ folder above
    2. svn export into the releases/{date} folder
    3. update of the current symlink

    So scm is a copy of my repository... but then there's an "export" copy of the repository in releases/{date}, and that is the one used by the application. scm doesn't seem to be used by anyone. Wouldn't I be just fine without the scm folder?

  • Retrieving many huge EPS files and converting them to JPEG in an ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB) in a database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service which would convert the content to JPEG and update the database (the JPEGContent column) with the JPEG content. However, retrieving the content for all 600 takes too long even from SQL Management Studio itself (5 minutes for 10 EPS contents). So I have two issues:

    1. How to get the EPS content (unfortunately, selecting only a certain number of contents is not an option):

       Approach 1:

            foreach (var DataRow in DataTable.Rows)
            {
                // get the Id and byte[] of EPS
                // Call the web method to convert EPS content to JPEG, which would also update the database.
            }

       Approach 2:

            foreach (var DataRow in DataTable.Rows)
            {
                // get only the Id of EPS
                // Hit the database to get the content of EPS
                // Call the web method to convert EPS content to JPEG, which would also update the database.
            }

       Or any other approach?

    2. Converting EPS to JPEG using a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea?

  • Java invokeAndWait of C# Action Delegate

    - by ikurtz
    The issue I mentioned in this post is actually happening because of cross-threading GUI issues (I hope). Could you help me with the Java version of the Action delegate, please? In C# it is done inline like this:

        this.Invoke(new Action(delegate() {...}));

    How is this achieved in Java? Thank you.

        public class processChatMessage implements Observer {
            public void update(Observable o, Object obj) {
                System.out.println("class class class" + obj.getClass());
                if (obj instanceof String) {
                    String msg = (String) obj;
                    formatChatHeader(chatHeader.Away, msg);
                    jlStatusBar.setText("Message Received");
                    // Show chat form
                    setVisibility();
                }
            }
        }

    processChatMessage is invoked by a separate thread, triggered by receiving new data from a remote node, and I think the error is being produced as it tries to update GUI controls. Do you think this is the reason? I ask because I'm new to Java and C#, but this is what I think is going on.

    SOLUTION:

        public class processChatMessage implements Observer {
            public void update(Observable o, Object obj) {
                if (obj instanceof String) {
                    final String msg = (String) obj;
                    try {
                        SwingUtilities.invokeAndWait(new Runnable() {
                            public void run() {
                                formatChatHeader(chatHeader.Away, msg);
                                jlStatusBar.setText("Message Received");
                                setVisibility();
                            }
                        });
                    } catch (InterruptedException e) {
                    } catch (InvocationTargetException e) {
                    }
                }
            }
        }
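
    A side note on the solution above: invokeAndWait blocks the network thread until the event dispatch thread has finished the Runnable (and fails if called on the EDT itself). When the caller does not need to wait for the UI, SwingUtilities.invokeLater is the usual fire-and-forget variant of the same idiom; a sketch of a drop-in replacement for the try/catch block, reusing the method names from the question:

        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                formatChatHeader(chatHeader.Away, msg);
                jlStatusBar.setText("Message Received");
                setVisibility();
            }
        });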

  • Memcache key generation strategy

    - by Maxim Veksler
    Given function f1, which receives n String arguments, which would be considered the better key generation strategy for memcache in the scenario described below? Our memcache client does internal md5sum hashing on the keys it gets:

        public class MemcacheClient {
            public Object get(String key) {
                String md5 = Md5sum.md5(key);
                // Talk to memcached to get the serialization...
                return memcached(md5);
            }
        }

    First option:

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = s1 + s2 + s3 + s4;
            return get(key);
        }

    Second option:

        /**
         * Calculate hash from Strings
         *
         * @param strings vararg list of String's
         * @return calculated md5sum hash
         */
        public static String stringHash(Object... strings) {
            if (strings == null)
                throw new NullPointerException("D'oh! Can't calculate hash for null");
            MD5 md5sum = new MD5();
            // if (prevHash != null)
            //     md5sum.Update(prevHash);
            for (int i = 0; i < strings.length; i++) {
                if (strings[i] != null) {
                    md5sum.Update("_" + strings[i] + "_"); // Convert to String...
                } else {
                    // If the object is null, allow minimum entropy by hashing its position
                    md5sum.Update("_" + i + "_");
                }
            }
            return md5sum.asHex();
        }

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = stringHash(s1, s2, s3, s4);
            return get(key);
        }

    Note that the possible problem with the second option is that we are doing a second md5sum (in the memcache client) on an already md5sum'ed digest result. Thanks for reading, Maxim.
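
    One point worth making explicit when weighing the two options: plain concatenation is ambiguous, since ("ab", "c") and ("a", "bc") collapse to the same key, whereas hashing an already-hashed digest is essentially harmless for cache-key purposes (it only re-maps 128 bits to 128 bits). A self-contained sketch of the second option's idea using the JDK's MessageDigest (the MD5 class in the question is assumed to be a third-party wrapper), with length prefixes instead of "_" delimiters:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class KeyDemo {

            // Length-prefix each part before hashing so that ("ab","c") and
            // ("a","bc") can no longer produce the same digest input.
            static String key(String... parts) throws Exception {
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                for (int i = 0; i < parts.length; i++) {
                    // Hash the position for null parts, as in the question's stringHash
                    String p = (parts[i] == null) ? "null@" + i : parts[i];
                    md5.update((p.length() + ":" + p).getBytes(StandardCharsets.UTF_8));
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : md5.digest()) hex.append(String.format("%02x", b));
                return hex.toString();
            }

            public static void main(String[] args) throws Exception {
                System.out.println(("ab" + "c").equals("a" + "bc"));       // true: option 1 collides
                System.out.println(key("ab", "c").equals(key("a", "bc"))); // false: option 2 does not
            }
        }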

  • Thread implemented as a Singleton

    - by rocknroll
    Hi all,

    I have a commercial application made with C, C++/Qt on the Linux platform. The app collects data from different sensors and displays them on a GUI. Each of the protocols for interfacing with the sensors is implemented using the singleton pattern and threads from the Qt QThread class. All the protocols except one work fine.

    Each protocol's run function for its thread has the following structure:

        void <ProtocolClassName>::run()
        {
            while (!mStop)  // check whether screen is closed or not
            {
                mutex.lock();
                while (!waitcondition.wait(&mutex, 5))
                {
                    if (mStop)
                        return;
                }
                // Code for receiving and processing incoming data
                mutex.unlock();
            }  // end while
        }

    Hierarchy of the GUI:

    1. Login screen.
    2. Screen of action.

    When a user logs in from the login screen, we enter the action screen where all data is displayed and all the threads for the different sensors start. They wait on the mStop variable in idle time, and when data arrives they jump to receiving and processing it. Incoming data for the problem protocol is 117 bytes.

    In the main GUI thread there are timers which, when they time out, grab the running instance of the protocol using the <ProtocolName>::instance() function, check the update variable of the singleton class, and if it is true, display the data. When the data display is done, they reset the update variable in the singleton class to false.

    The problematic protocol has an update time of 1 second, which is also the frame rate of the protocol. When I comment out the display function it runs fine, but when display is activated the application hangs consistently after 6-7 hours. I have asked this question on many forums but haven't received any worthwhile suggestions. I hope that here I will get some help.

    Also, I have read a lot of literature on singletons and multithreading, and found that people always discourage the use of singletons, especially in C++. But in my application I can think of no other design for the implementation.

    Thanks in advance,
    A hapless programmer

  • Comparing RPG to C# and SQL.

    - by Kevin
    In an RPG program (one of IBM's languages on the AS/400) I can "chain" out to a file to see if a record (say, a certain customer record) exists in the file. If it does, then I can update that record instantly with new data. If the record doesn't exist, I can write a new record. The code would look like this:

        Customer    Chain    CustFile    71    ; turn on indicator 71 if not found
        if *in71                               ; if 71 is "on"
            eval CustID = Customer;
            eval CustCredit = 10000;
            write CustRecord
        else                                   ; 71 not on, record found
            CustCredit = 10000;
            update CustRecord
        endif

    Not being real familiar with SQL/C#, I'm wondering if there is a way to do a random retrieval from a file (which is what "chain" does in RPG). Basically I want to see if a record exists. If it does, update the record with some new information. If it does not, then I want to write a new record. I'm sure it's possible, but I'm not quite sure how to go about doing it. Any advice would be greatly appreciated.
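
    Not an answer recorded in the thread, just a sketch of the usual pattern: a select to test existence, then an update or insert. The table and column names below are hypothetical, and Java/JDBC is used for illustration; the C# ADO.NET flow is the same shape with SqlConnection/SqlCommand.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class ChainLikeUpsert {

            // Mirrors the RPG flow: CHAIN (random read by key), then WRITE or UPDATE.
            static void upsertCustomer(Connection con, String custId, int credit)
                    throws SQLException {
                boolean exists;
                try (PreparedStatement sel = con.prepareStatement(
                        "SELECT 1 FROM Customers WHERE CustID = ?")) {
                    sel.setString(1, custId);
                    try (ResultSet rs = sel.executeQuery()) {
                        exists = rs.next();                     // the CHAIN: found or not
                    }
                }
                String sql = exists
                        ? "UPDATE Customers SET CustCredit = ? WHERE CustID = ?"      // record found
                        : "INSERT INTO Customers (CustCredit, CustID) VALUES (?, ?)"; // *in71 on
                try (PreparedStatement stmt = con.prepareStatement(sql)) {
                    stmt.setInt(1, credit);
                    stmt.setString(2, custId);
                    stmt.executeUpdate();
                }
            }
        }

    Many databases also collapse this into a single statement (MERGE on SQL Server and DB2, INSERT ... ON DUPLICATE KEY UPDATE on MySQL), which avoids the race between the test and the write.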

  • IIS 401.3 - Unauthorized on only 1 server out of 3 set up for network load balancing

    - by Tony
    Over the weekend our server admin set up two virtual Windows 2008 machines with IIS installed and set them up under NLB. I came in and changed the application pool the website was running under to our domain account that has proper access to the database and the file share hosting our .NET web application, Sitefinity, and changed it to .NET 4 Integrated. NLB and everything was running fine on both servers.

    He brought up the third server for our cluster on Tuesday and I performed the same actions. The only difference was that I was given admin rights for the third server so I could set it up remotely instead of going to his office. He has full control over the share and the NTFS perms on \\hostname\Sitefinity, and I believe I only had read access. I pointed the web site to the same \\hostname\Sitefinity\sitename share that the others were on, and the authentication/authorization test settings passed. I hit the site from http://localhost (like I did successfully from the other two before trying the cluster's IP address) and I received an HTTP Error 401.3 - Unauthorized. I've verified many times that the application pool is running under the same service account. I tried hitting just a simple test.htm; that works fine on both of the first two servers, but I get the same 401.3 on the third. I copied my dev project to the local inetpub directory and re-pointed the website, and that ran perfectly.

    I turned on Failed Request Tracing, and it acts like it's still running under the local IUSR account (instead of my domain account)? Here is an excerpt of the File Cache Access Start and the error from the trace:

        FileName                  \\hostname\sitefinity\sitename\test.htm
        UserName                  IUSR
        DomainName                NT AUTHORITY
        ----------
        Successful                false
        FileFromCache             false
        FileAddedToCache          false
        FileDirmoned              true
        LastModCheckErrorIgnored  true
        ErrorCode                 2147942405
        LastModifiedTime
        ErrorCode                 Access is denied. (0x80070005)
        ----------
        ModuleName                IIS Web Core
        Notification              2
        HttpStatus                401
        HttpReason                Unauthorized
        HttpSubStatus             3
        ErrorCode                 2147942405
        ConfigExceptionInfo
        Notification              AUTHENTICATE_REQUEST
        ErrorCode                 Access is denied. (0x80070005)

    My personal AD account was then granted read/write perms to the share, so I created a new application pool and set the site under it in case there was an issue with the application pool, but no success. I created another under my own account and it still failed. It just seems like maybe it's not trying to access the files under the account my application pools are running under, although that's the only way I've done things before. I set the Physical Path Credentials in Advanced Settings on the site to the service account and it threw a 500 error of some sort, so I assume that's not the answer (and I don't have to do it on the other servers). It's like somehow I'm trying to force impersonation on the IUSR account or something?

  • How to best show progress info when using ADO.NET?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations. Specifically, when inserting/updating data that may be on the order of hundreds of KB or MB.

    Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB, or what its progress is. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate an estimated size per DataRow based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter.

    Am I stuck just using an indeterminate progress bar or mouse waiting cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?
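
    One common way out, sketched here rather than taken from the thread: slice the pending rows into fixed-size chunks and flush each chunk separately, reporting progress after every flush. Generated TableAdapters typically expose an Update(DataRow[]) overload that makes the same slicing possible on the ADO.NET side; the sketch below shows the shape in Java/JDBC terms, with hypothetical table and column names.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;
        import java.util.function.IntConsumer;

        public class ChunkedUpdate {

            // rows holds {id, payload} pairs; progress receives a percentage after
            // each flushed chunk, giving the UI real intermediate points to show.
            static void updateInChunks(Connection con, List<String[]> rows,
                                       int chunkSize, IntConsumer progress) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE Items SET Payload = ? WHERE Id = ?")) {
                    for (int i = 0; i < rows.size(); i++) {
                        ps.setString(1, rows.get(i)[1]);    // payload
                        ps.setString(2, rows.get(i)[0]);    // id
                        ps.addBatch();
                        if ((i + 1) % chunkSize == 0 || i == rows.size() - 1) {
                            ps.executeBatch();              // one round trip per chunk
                            progress.accept((i + 1) * 100 / rows.size());
                        }
                    }
                }
            }
        }

    The granularity is per chunk rather than per KB, but it turns one opaque call into as many progress points as there are chunks.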

  • Problem with Grails web app running in production: "No such property: save for class: JsecRole"

    - by Sarah Boyd
    I've got a Grails 1.1 web app running great in development, but when I try to run it in production with an SQL Server database it crashes in a weird way. The relevant part of my DataSource.groovy is as follows:

        environments {
            development {
                dataSource {
                    dbCreate = "create-drop" // one of 'create', 'create-drop', 'update'
                    url = "jdbc:hsqldb:mem:devDB"
                }
            }
            test {
                dataSource {
                    dbCreate = "update"
                    url = "jdbc:hsqldb:mem:testDb"
                }
            }
            production {
                dataSource {
                    dbCreate = "update"
                    driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
                    username = "sa"
                    password = "pw4db"
                    url = "jdbc:sqlserver://localhost:1433;databaseName=ReleasePlanner;selectMethod=cursor"
                }
            }
        }

    The error message I receive is:

        Message: No such property: save for class: JsecRole
        Caused by: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
        Class: ProjectController
        At Line: [28]
        Code Snippet:
        27: println "###about to create project roles"
        28: userManagerService.createProjectRoles(project)
        29: userManagerService.addUserToProject(session.user.id.toString(), project, 'owner')

    The stacktrace is as follows:

        org.codehaus.groovy.runtime.InvokerInvocationException: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
            at org.jsecurity.web.servlet.JSecurityFilter.doFilterInternal(JSecurityFilter.java:382)
            at org.jsecurity.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:180)
        Caused by: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
            at UserManagerService.createProjectRoles(UserManagerService.groovy:9)
            at UserManagerService$$FastClassByCGLIB$$6fa73713.invoke(<generated>)
            at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149)
            at UserManagerService$$EnhancerByCGLIB$$fcf60984.createProjectRoles(<generated>)
            at UserManagerService$createProjectRoles.call(Unknown Source)
            at ProjectController$_closure4.doCall(ProjectController.groovy:28)
            at ProjectController$_closure4.doCall(ProjectController.groovy)
            ... 2 more

    Any help is appreciated.

    Thanks,
    Sarah

  • How to make new file permissions inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data, and I am running a script under the user id 'robot'. robot writes to the data directory and updates files inside it. The idea is that data is open for both me and robot to update. So I set up the permissions and owner group like this:

        drwxrwxr-x    2 me       robot-grp    4096 Jun 11 20:50 data

    where both me and robot belong to 'robot-grp'. I changed the permission and owner group recursively, like the parent directory.

    I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I hoped. Instead they look like this:

        -rw-r--r--    1 me       users           6 Jun 11 20:50 new-file.txt

    When robot tries to update new-file.txt, it fails due to lack of file permission. I'm not sure if setting umask helps. In any case, the new files do not really follow it:

        $ umask -S
        u=rwx,g=rx,o=rx

    I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.

  • Policy file error while loading images from Facebook

    - by Fahim Akhter
    Hi, I'm making a game leaderboard on Facebook. I'm not using Connect but working inside the canvas. When I try to load the images from Facebook it gives me the following error:

        SecurityError: Error #2122: Security sandbox violation: Loader.content:
        http://test cannot access http://profile.ak.fbcdn.net/v22941/254/15/q652310588_2173.jpg
        A policy file is required, but the checkPolicyFile flag was not set when this media was loaded.

    Here is my loader code:

        public var preLoader:Loader;
        preLoader = new Loader();

        // **update**
        Security.loadPolicyFile('http://api.facebook.com/crossdomain.xml');
        Security.allowDomain('http://profile.ak.fbcdn.net');
        Security.allowInsecureDomain('http://profile.ak.fbcdn.net');
        // **update-end**

        public function imageContainer(Imagewidth:Number, Imageheight:Number, url:String, path:String)
        {
            preLoader = new Loader();
            Security.loadPolicyFile("http://api.facebook.com/crossdomain.xml");
            var context:LoaderContext = new LoaderContext();
            context.checkPolicyFile = true;
            context.applicationDomain = ApplicationDomain.currentDomain;
            preLoader.load(new URLRequest(path), context);
        }

    Any ideas? I am importing the right classes, though.

    UPDATE: I am loading the images from a different domain: say the calling function is at http://fahim.com and the images are from http://profile.ak.fbcdn.net/v22941/254/15/q652310588_2173.jpg or something similar. (I have made sure the pictures are static and do not require a Facebook login or anything; they are just users' public profile pictures.)

  • How reliable is DateTime.UtcNow in Silverlight applications?

    - by Edward Tanguay
    I have a Silverlight application which users will be running in various time zones. These applications load their data from the server upon startup, then cache it in IsolatedStorage. When I make changes to the data on the server, I want to be able to change the "last updated time" so that all Silverlight clients download the newest data the next time they check this date.

    However, I'm a bit confused as to how to handle the time zone issue, since if the server is in New York and the update time is set to 2010-01-01 17:00:00, a client in Seattle that compares it to its local time of 2010-01-01 14:00:00 won't update, and will continue to provide old data for three more hours.

    My solution is to always post the update time in UTC time, not the local time on the server, and then make the Silverlight app check with DateTime.UtcNow. Is this as easy as it sounds, or are there issues with this, e.g. time zones not being set correctly on computers, so that the Silverlight app does not report the correct UTC time?

    Can anyone say from experience how likely it is that using DateTime.UtcNow like this for cache refreshing will work in all cases? If DateTime.UtcNow is not reliable, I will just use an incremented "DataVersion" integer, but there are other scenarios in which getting time zone synchronization down would make it useful to thoroughly understand how to solve this in Silverlight apps.
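
    One way to sidestep the client-clock question entirely, sketched below with hypothetical names: treat the server's "last updated" value as an opaque token and compare it to the token cached at the last successful refresh, never to the client's own clock. Whether the token is a UTC timestamp or the incremented DataVersion integer mentioned above then stops mattering. (Java is used for illustration; the Silverlight version is the same shape.)

        public class CacheCheck {

            interface Api {
                long lastUpdatedUtc();   // epoch millis published by the server
                String fetchData();
            }

            private long cachedToken;    // persisted alongside the cached data on the client
            private String cachedData;

            public void refreshIfStale(Api api) {
                long serverToken = api.lastUpdatedUtc();
                if (serverToken > cachedToken) {    // compare token to token, not to "now"
                    cachedData = api.fetchData();
                    cachedToken = serverToken;      // store the server's value verbatim
                }
            }
        }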

  • Observer pattern and violation of the Single Responsibility Principle

    - by Devil Jin
    I have an applet which repaints itself once the text has changed.

    Design 1:

        // MyApplet.java
        public class MyApplet extends Applet implements Listener {
            private DynamicText text = null;

            public void init() {
                text = new DynamicText("Welcome");
            }

            public void paint(Graphics g) {
                g.drawString(text.getText(), 50, 30);
            }

            // implement Listener update() method
            public void update() {
                repaint();
            }
        }

        // DynamicText.java
        public class DynamicText implements Publisher {
            // implements Publisher interface methods
            // notify listeners whenever text changes
        }

    Isn't this a violation of the Single Responsibility Principle, where my applet not only acts as an applet but also has to do the Listener's job? In the same way, the DynamicText class not only generates the dynamic text but also updates the registered listeners.

    Design 2:

        // MyApplet.java
        public class MyApplet extends Applet {
            private AppletListener appLstnr = null;

            public void init() {
                appLstnr = new AppletListener(this);
                // applet stuff
            }
        }

        // AppletListener.java
        public class AppletListener implements Listener {
            private Applet applet = null;

            public AppletListener(Applet applet) {
                this.applet = applet;
            }

            public void update() {
                this.applet.repaint();
            }
        }

        // DynamicText.java
        public class DynamicText {
            private TextPublisher textPblshr = null;

            public DynamicText(TextPublisher txtPblshr) {
                this.textPblshr = txtPblshr;
            }

            // call textPblshr.notifyListeners whenever text changes
        }

        // TextPublisher.java
        public class TextPublisher implements Publisher {
            // implements Publisher interface methods
        }

    Q1. Is design 1 an SRP violation?
    Q2. Is composition a better choice here to remove the SRP violation, as in design 2?

  • How do I use Perl's WWW::Facebook::API to publish to a user's newsfeed?

    - by Russell C.
    We use Facebook Connect on our site in conjunction with the WWW::Facebook::API CPAN module to publish to our users' newsfeeds when requested by the user. So far we've been able to successfully update the user's status using the following code:

        use WWW::Facebook::API;

        my $facebook = WWW::Facebook::API->new(
            desktop         => 0,
            api_key         => $fb_api_key,
            secret          => $fb_secret,
            session_key     => $query->cookie($fb_api_key.'_session_key'),
            session_expires => $query->cookie($fb_api_key.'_expires'),
            session_uid     => $query->cookie($fb_api_key.'_user')
        );

        my $response = $facebook->stream->publish(
            message => qq|Test status message|,
        );

    However, when we try to update the code above so we can publish newsfeed stories that include attachments and action links, as specified in the Facebook API documentation for Stream.Publish, we have tried about 100 different ways without any success. According to the CPAN documentation, all we should have to do is update our code to something like the following and pass the attachments and action links appropriately, which doesn't seem to work:

        my $response = $facebook->stream->publish(
            message      => qq|Test status message|,
            attachment   => $json,
            action_links => [@links],
        );

    For example, we are passing the above arguments as follows:

        $json = qq|{
            'name': 'i\'m bursting with joy',
            'href': 'http://bit.ly/187gO1',
            'caption': '{*actor*} rated the lolcat 5 stars',
            'description': 'a funny looking cat',
            'properties': {
                'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN' },
                'ratings': '5 stars'
            },
            'media': [{
                'type': 'image',
                'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg',
                'href': 'http://bit.ly/187gO1'
            }]
        }|;

        @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}",
                  "{'text':'Link 2', 'href':'http://www.link2.com'}"];

    Neither the above nor any of the other representations we tried seems to work. I'm hoping some other Perl developer out there has this working and can explain how to create the attachment and action_links variables appropriately in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!

  • KendoUI Mobile switch and datasource

    - by OnaBai
    I'm trying to have a list of items displayed using a ListView, something like:

        <div data-role="view" data-model="my_model">
            <ul data-role="listview" data-bind="source: ds" data-template="list-tmpl"></ul>
        </div>

    where I have a view using a model called my_model and a ListView whose source is bound to ds. My model is something like:

        var my_model = kendo.observable({
            ds: new kendo.data.DataSource({
                transport: {
                    read: readData,
                    update: updateData,
                    create: updateData,
                    remove: updateData
                },
                pageSize: 10,
                schema: {
                    model: {
                        id: "id",
                        fields: {
                            id: { type: "number" },
                            name: { type: "string" },
                            active: { type: "boolean" }
                        }
                    }
                }
            })
        });

    Each item includes an id, a name (a string), and a boolean named active. The template used to render each element is:

        <script id="list-tmpl" type="text/kendo-tmpl">
            <span>#= name # : #= active #</span>
            <input data-role="switch" data-bind="checked: active"/>
        </script>

    where I display the name and (for debugging) the value of active. In addition, I render a switch bound to active. You should see something like the following (screenshot omitted).

    The problems observed are:

    1. If you click on a switch, the value of active next to the name changes (as expected), but if you then pick another switch, its value (neither next to the name nor in the DataSource) is not updated, despite the switch itself being correctly toggled.
    2. The update handler in the DataSource is never invoked, not even for the first chosen switch, despite the DataSource for that first toggled switch being updated.

    You can check it in JSFiddle: http://jsfiddle.net/OnaBai/K7wEC/

    How do I make the DataSource get updated and the update handler invoked?

  • Best Pattern for AllowUnsafeUpdates

    - by webwires
    So far, in my research I have seen that it is unwise to set AllowUnsafeUpdates on a GET request operation, to avoid cross-site scripting. But if it is required to allow this, what is the proper way to handle the situation to mitigate any exposure?

    Here is my best first guess on a reliable pattern if you absolutely need to allow web or site updates on a GET request. Best practice?

        protected override void OnLoad(System.EventArgs e)
        {
            if (Request.HttpMethod == "POST")
            {
                SPUtility.ValidateFormDigest(); // will automatically set AllowSafeUpdates to true
            }
            // If not a POST then AllowUnsafeUpdates should be used only
            // at the point of update and reset immediately after finished.
            // NOTE: Is this true? How is cross-site scripting used on GET
            // and what mitigates the vulnerability?
        }

        // Point of item update
        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite site = new SPSite(SPContext.Current.Site.Url))
            {
                using (SPWeb web = site.RootWeb)
                {
                    bool allowUpdates = web.AllowUnsafeUpdates; // store original value
                    web.AllowUnsafeUpdates = true;
                    // ... Do something and call Update() ...
                    web.AllowUnsafeUpdates = allowUpdates; // restore original value
                }
            }
        });

    Feedback on the best pattern is appreciated.

  • Is it possible to set Windows Authentication on a subfolder of a site having only anonymous authentication?

    - by Lieven Cardoen
    If I try to do that, I get the following error:

        HTTP Error 401.2 - Unauthorized
        You are not authorized to view this page due to invalid authentication headers.
        Module       IIS Web Core
        Notification AuthenticateRequest
        Handler      PageHandlerFactory-Integrated
        Error Code   0x80070005

    If I convert the subfolder to an application, it does work. But is it possible without converting it to an application? If you deploy such a site, will this subfolder automatically be an application, or is this information kept in IIS7?

  • My images aren't updating immediately upon changing their src in JavaScript

    - by Dale
    I'm using the function below to change the img src. It's an array of ten images. When going through the loop using breakpoints, the images don't all update on the page immediately; some of them do. If I inspect an unchanged image on the page (while paused at a breakpoint), the src has changed, but the image hasn't changed yet. All of the unchanged images get updated correctly when the function ends.

    Anyone know why they don't all get updated instantly, and how I can force them to update? Also, is there a way I can hold off the updates of all of them until they're all reassigned, and thus have them all update "simultaneously"?

    Here's my function:

        function mainFunction() {
            finalSet = calculateSet();
            for (var int = 0; int < finalSet.length; int++) {
                var fileName = "cardImg" + (int);
                document.getElementById(fileName).src = "images/cards/" + finalSet[int].name + ".jpg";
            }
        }

    Thanks for the help.
    Dale

  • Problem installing mod_jk on Ubuntu Karmic with Apache httpd 2.2.12 and Tomcat 6

    - by Deny Prasetyo
    I have a problem when configuring mod_jk on Ubuntu. I use Apache httpd 2.2.12 and Tomcat 6. I installed Apache httpd and the mod_jk module from Synaptic and use the default configuration.

    Here is my mod_jk.conf:

        # Load mod_jk module
        # Update this path to match your modules location
        LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so

        # Where to find workers.properties
        # Update this path to match your conf directory location
        JkWorkersFile /etc/apache2/workers.properties

        # Where to put jk logs
        # Update this path to match your logs directory location
        JkLogFile /etc/apache2/logs/mod_jk.log

        # Set the jk log level [debug/error/info]
        JkLogLevel info

        # Select the log format
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

        # JkOptions indicate to send SSL KEY SIZE
        JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories

        # JkRequestLogFormat set the request format
        JkRequestLogFormat "%w %V %T"

        # Send everything for context /ws to worker ajp13
        JkMount /themark ajp13
        JkMount /themark/* ajp13

        # Send everything for context /jsp-examples to worker ajp13
        JkMount /static ajp13
        JkMount /static/* ajp13

        # Send everything for context /servlets-examples to worker ajp13
        JkMount /servlets-examples ajp13
        JkMount /servlets-examples/* ajp13

    And this is my workers.properties:

        # Define 1 real worker named ajp13
        worker.list=ajp13

        # Set properties for worker named ajp13 to use ajp13 protocol, and run on port 8009
        worker.ajp13.type=ajp13
        worker.ajp13.host=localhost
        worker.ajp13.port=8009
        worker.ajp13.lbfactor=50
        worker.ajp13.cachesize=10
        worker.ajp13.cache_timeout=600
        worker.ajp13.socket_keepalive=1
        worker.ajp13.socket_timeout=300

    When I start Apache httpd and Tomcat 6, it seems that mod_jk loads successfully and Tomcat 6 recognizes the AJP connector. Here is my Tomcat 6 log:

        INFO: Starting Coyote HTTP/1.1 on http-8080
        Mar 29, 2010 11:48:34 AM org.apache.jk.common.ChannelSocket init
        INFO: JK: ajp13 listening on /0.0.0.0:8009
        Mar 29, 2010 11:48:34 AM org.apache.jk.server.JkMain start
        INFO: Jk running ID=0 time=0/16 config=null

    Here is my mod_jk.log:

        [Mon Mar 29 11:06:53 2010][6688:3499775792] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized
        [Mon Mar 29 11:16:59 2010][18277:1983043376] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized
        [Mon Mar 29 11:16:59 2010][18278:1983043376] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized

    But when I access http://localhost/themark it doesn't work. It seems that Apache httpd loads the mod_jk module and Tomcat is listening on AJP, yet requests are not being forwarded. Has anybody ever had the same problem?

    NB: I use the same config on Windows using XAMPP Lite and it works well.

  • Rails fields_for parameters for a has_many relation don't yield an Array in params

    - by user1289061
    I have a model Sensor with has_many and accepts_nested_attributes_for relationships to another model Watch. In a form to update a Sensor, I have something like the following:

        <%= sensor_form.fields_for :watches do |watches_form| %>
            <%= watches_form.label :label %><br />
            <%= watches_form.text_field :label %>
        <% end %>

    This is intended to allow editing of the already-created Watches belonging to a Sensor. This call spits out form inputs like so:

        <input name="sensor[watches_attributes][0][label]" ... />
        <input name="sensor[watches_attributes][0][id]" ... />

    When this gets submitted, the params object in the Sensor controller gets an assoc like:

        "sensor" => {
            "id" => "1",
            "watches_attributes" => {
                "0" => { "id" => "1", "label" => "foo" },
                "1" => { "id" => "2", "label" => "bar" }
            }
        }

    For a has_many, accepts_nested_attributes_for update to work upon the @sensor.update_attributes call, it seems that that attributes key really must map to an Array. From what I've seen in the examples, the combination of has_many, accepts_nested_attributes_for, and sensor_form.fields_for should allow me to pass the resulting params object directly to @sensor.update_attributes and update each related object as intended. Instead, the Sensor update takes place with no errors, but the Watch objects are not updated (since "watches_attributes" maps to a Hash instead of an Array?). Have I missed something?

  • client-server syncing methodology [theoretical]

    - by Kenneth Ballenegger
    I'm in the process of building a web app that syncs with an iOS client. I'm currently trying to figure out how to go about syncing, and I've come up with the following two directions.

    I've got a fairly simple server web app with a list of items. They are ordered by date modified, and as such syncing the order does not matter.

    One direction I'm considering is to let the client deal with syncing. I've already got an API that lets the client get the data, as well as perform certain actions on it, such as updating, adding, or removing single items. I was considering: 1) on each sync, asking the server for all items modified since the last successful sync and updating the local records based on what's returned by the server, and 2) building a persistent queue of create/remove/update requests on the client, and keeping them until confirmation by the server. The risk with this approach is that I'm basically asking each side to send changes to the other side, hoping it works smoothly, but risking a divergence at some point. This would probably be more bandwidth-efficient, though.

    The other direction I was considering is a more traditional model. I would have a "sync" process in which the client sends its whole list to the server (or a subset since the last successful sync), the server updates the data on the server (fixing conflicts by keeping the last-modified item, and keeping deleted items with a deleted = 1 field), and the server returns an updated list of items (since the last successful sync) which the client then replaces its data with.

    Thoughts?
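
    A minimal sketch of the first direction's client-side queue, with all names hypothetical: operations stay queued until the server acknowledges them, so a failed or interrupted sync simply replays from the head of the queue next time. (Java is used here for illustration; the iOS client would keep the same shape.)

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class SyncQueue {

            enum Kind { CREATE, UPDATE, REMOVE }

            record Op(Kind kind, String itemId, String payload) {}

            interface Server {
                boolean apply(Op op);    // true once the server has acknowledged the op
            }

            private final Deque<Op> pending = new ArrayDeque<>();

            // In real code the queue would be persisted so ops survive a restart.
            public void enqueue(Op op) {
                pending.add(op);
            }

            public void sync(Server server) {
                while (!pending.isEmpty()) {
                    if (!server.apply(pending.peek())) {
                        break;              // no ack: keep the op and retry next sync
                    }
                    pending.remove();       // acked: now safe to drop
                }
                // ...then pull server-side changes modified since the last successful sync
            }
        }

    The divergence risk mentioned above is then bounded by making the replay idempotent on the server, for example by keying creates on a client-generated id so a retried create cannot duplicate an item.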
