Search Results

Search found 5623 results on 225 pages for 'prevent deletion'.


  • Does ASP.NET need to be configured for Full Trust to implement 'PageHandlerFactory'?

    - by Kev
    Our hosting platform (running IIS 6/ASP.NET 2.0) is configured to run under partial trust. In the machine-wide web.config file we set the ASP.NET trust level to Medium (and lock it to prevent overrides) and use a modified policy file. When trying to add a custom HttpHandler to handle .aspx requests for a website running in this configuration, I get the following security exception:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Request failed.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SecurityException: Request failed.]
        System.Reflection.Assembly._GetType(String name, Boolean throwOnError, Boolean ignoreCase) +0
        System.Reflection.Assembly.GetType(String name, Boolean throwOnError, Boolean ignoreCase) +42
        System.Web.Compilation.CompilationUtil.GetTypeFromAssemblies(AssemblyCollection assembliesCollection, String typeName, Boolean ignoreCase) +172
        System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +291
        System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase) +52

    I'm using a class derived from PageHandlerFactory, for example:

        public class MyPageHandlerFactory : PageHandlerFactory
        {
            public override System.Web.IHttpHandler GetHandler(System.Web.HttpContext context, string requestType, string virtualPath, string path)
            {
                // CustomPageHandler derives from System.Web.UI.Page
                return new CustomPageHandler();
            }
        }

    My web.config httpHandlers configuration is as follows:

        <httpHandlers>
            <add verb="*" path="*.aspx" type="MyPageHandler.MyPageHandlerFactory" />
        </httpHandlers>

    The documentation for PageHandlerFactory shows that it is decorated with the following attributes:

        [PermissionSetAttribute(SecurityAction.LinkDemand, Unrestricted = true)]
        [PermissionSetAttribute(SecurityAction.InheritanceDemand, Unrestricted = true)]
        public class PageHandlerFactory : IHttpHandlerFactory

    Does this mean that I need to set ASP.NET to run at Full Trust to be able to create my own PageHandlerFactory classes?
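
    One avenue worth testing before raising the trust level (a sketch, not from the original question): the InheritanceDemand only applies when deriving from PageHandlerFactory, so implementing the IHttpHandlerFactory interface directly sidesteps it; the interface does not appear to carry the same demands. Note that the stack trace fails inside BuildManager.GetType, so under Medium trust the handler assembly may also need to be precompiled and fully trusted (e.g. GAC'd or marked APTCA) before ASP.NET can load the type.

        // Minimal sketch: implement the interface instead of inheriting
        // from the LinkDemand/InheritanceDemand-protected PageHandlerFactory.
        using System.Web;

        public class MyPageHandlerFactory : IHttpHandlerFactory
        {
            public IHttpHandler GetHandler(HttpContext context, string requestType,
                                           string virtualPath, string path)
            {
                // CustomPageHandler derives from System.Web.UI.Page (as in the question)
                return new CustomPageHandler();
            }

            public void ReleaseHandler(IHttpHandler handler)
            {
                // Nothing to release in this sketch.
            }
        }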

    Read the article

  • Suppressing PostSharp Multicast with Attribute

    - by Dan Bryant
    I've recently started experimenting with PostSharp and I found a particularly helpful aspect to automate implementation of INotifyPropertyChanged. You can see the example here. The basic functionality is excellent (all properties will be notified), but there are cases where I might want to suppress notification. For instance, I might know that a particular property is set once in the constructor and will never change again. As such, there is no need to emit the code for NotifyPropertyChanged.

    The overhead is minimal when classes are not frequently instantiated, and I can prevent the problem by switching from an automatically generated property to a field-backed property and writing to the field. However, as I'm learning this new tool, it would be helpful to know if there is a way to tag a property with an attribute to suppress the code generation. I'd like to be able to do something like this:

        [NotifyPropertyChanged]
        public class MyClass
        {
            public double SomeValue { get; set; }
            public double ModifiedValue { get; private set; }

            [SuppressNotify]
            public double OnlySetOnce { get; private set; }

            public MyClass()
            {
                OnlySetOnce = 1.0;
            }
        }
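
    A sketch of how such a marker attribute might look (hypothetical, not PostSharp API; the filtering hook depends on how the INotifyPropertyChanged aspect enumerates properties, e.g. in a CompoundAspect's ProvideAspects method):

        using System;

        // Hypothetical marker attribute; the notification aspect would check
        // for it before instrumenting a property setter.
        [AttributeUsage(AttributeTargets.Property, AllowMultiple = false)]
        public sealed class SuppressNotifyAttribute : Attribute { }

        // Inside the aspect, when deciding whether to instrument a property
        // (sketch -- the surrounding code depends on the PostSharp version):
        // if (property.IsDefined(typeof(SuppressNotifyAttribute), false))
        //     continue; // skip code generation for this property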

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set.

    To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates into temporary tables. All access to non-temporary tables is selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.

    So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
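
    On the isolation-level question, a minimal sketch of making it explicit with Spring's TransactionTemplate rather than relying on the default (READ COMMITTED is chosen here only as an example; the transaction manager is assumed to be wired elsewhere):

        import org.springframework.transaction.PlatformTransactionManager;
        import org.springframework.transaction.TransactionDefinition;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallbackWithoutResult;
        import org.springframework.transaction.support.TransactionTemplate;

        public class BatchTaskRunner {
            private final TransactionTemplate txTemplate;

            public BatchTaskRunner(PlatformTransactionManager txManager) {
                txTemplate = new TransactionTemplate(txManager);
                // Pin the isolation level instead of inheriting the datastore default.
                txTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);
                // Long-running work: raise the timeout accordingly (seconds).
                txTemplate.setTimeout(60 * 60);
            }

            public void runTask() {
                txTemplate.execute(new TransactionCallbackWithoutResult() {
                    @Override
                    protected void doInTransactionWithoutResult(TransactionStatus status) {
                        // ... call the stored procedures batch by batch on this one
                        // connection, so the temporary tables stay visible ...
                    }
                });
            }
        }

    For debugging which tables hold locks, the TRANSACTIONS section of MySQL's SHOW ENGINE INNODB STATUS output is the usual starting point.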

    Read the article

  • Old-fashioned HTML onclick return false doesn't work in IE when the jQuery script is included

    - by user292662
    OK, so I'm quite new to jQuery, but I found this bizarre problem just now. If we ignore jQuery for a second and consider this scenario: I have two links like below, both with an href and both with an onclick event. The first link will not follow the href because the onclick returns false, and the second link will because the onclick returns true.

        <a href="/page.html" onclick="return false;">Dont follow</a>
        <a href="/page.html" onclick="return true;">Follow</a>

    This works just hunky-dory in every browser, as it should. The thing is, as soon as I include the jQuery script on the page, this stops working in all versions of IE, which then always follow the href whether the onclick returns false or not. (It continues to work fine in other browsers.)

    Now, if I add an event using jQuery and call .preventDefault() on the event object instead of doing it the old-fashioned way, this behaves correctly. You may say, "well, just do that then", but I have a site with thousands of lines of code and I am adding jQuery support; I don't want to run the risk that I might miss an already defined html onclick="" and break the website. I can't see why jQuery should prevent perfectly normal JavaScript concepts from working, so is this a jQuery bug or am I missing something?

    Read the article

  • Multiple Concurrent Postbacks when using UpdatePanels

    - by d4nt
    Here's an example app that I built to demonstrate my problem. A single aspx page with the following on it:

        <form id="form1" runat="server">
            <asp:ScriptManager runat="server" />
            <asp:Button runat="server" ID="btnGo" Text="Go" OnClick="btnGo_Click" />
            <asp:UpdatePanel runat="server">
                <ContentTemplate>
                    <asp:TextBox runat="server" ID="txtVal1" />
                </ContentTemplate>
            </asp:UpdatePanel>
        </form>

    Then, in the code-behind, we have the following:

        protected void btnGo_Click(object sender, EventArgs e)
        {
            Thread.Sleep(5000);
            Debug.WriteLine(string.Format("{0}: {1}", DateTime.Now.ToString("HH:mm:ss.fffffff"), txtVal1.Text));
            txtVal1.Text = "";
        }

    If you run this and click on the "Go" button multiple times, you will see multiple debug statements in the "Output" window, showing that multiple requests have been processed. This appears to contradict the documented behaviour of update panels (i.e. if you make a request while one is processing, the first request gets terminated and the current one is processed).

    Anyway, the point is I want to fix it. The obvious option would be to use JavaScript to disable the button after the first press, but that strikes me as hard to maintain; we potentially have the same issue on a lot of screens, and it could be easily broken if someone renames a button. Do you have any suggestions? Perhaps there is something I could do in BeginRequest in Global.asax to detect a duplicate request? Is there some setting or feature on the UpdatePanel to stop it doing this, or maybe something in the AjaxControlToolkit that will prevent it?
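
    One centralized possibility (a sketch, not from the original question): the ASP.NET AJAX client API exposes Sys.WebForms.PageRequestManager, and cancelling initializeRequest while an async postback is in flight blocks concurrent submissions for the whole page, with no per-button wiring to break when a button is renamed:

        // Register once on any page with a ScriptManager; swallows a new
        // async postback while a previous one is still running.
        Sys.Application.add_load(function () {
            var prm = Sys.WebForms.PageRequestManager.getInstance();
            prm.add_initializeRequest(function (sender, args) {
                if (prm.get_isInAsyncPostBack()) {
                    args.set_cancel(true);
                }
            });
        });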

    Read the article

  • Browsing errors in a PowerBuilder application converted to ASP.NET

    - by user493325
    I had a PowerBuilder application which I converted to a web application in the form of ASP.NET (aspx) files. I deployed and published the converted web application (copied it and granted the ASP.NET, Network Service and IUSR accounts permissions so users can access it) in IIS 6.0 on Windows Server 2003; the ASP.NET version is 2.0. The error message I get when I browse the default.aspx page is the following:

        Server Error in '/' Application.
        Runtime Error
        Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.
        Details: To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off".

        <!-- Web.Config Configuration File -->
        <configuration>
            <system.web>
                <customErrors mode="Off"/>
            </system.web>
        </configuration>

        Notes: The current error page you are seeing can be replaced by a custom error page by modifying the "defaultRedirect" attribute of the application's <customErrors> configuration tag to point to a custom error page URL.

        <!-- Web.Config Configuration File -->
        <configuration>
            <system.web>
                <customErrors mode="RemoteOnly" defaultRedirect="mycustompage.htm"/>
            </system.web>
        </configuration>

    Another error message that appears on the server itself is:

        Server Error in '/' Application.
        Configuration Error
        <roleManager enabled="true">
        <membership>
        </roleManager>

    Thanks in advance...
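
    The snippet quoted in the second error shows a <membership> element opened inside <roleManager> and never closed, which would itself be a configuration error. A sketch of a corrected web.config (an assumption based only on that snippet; roleManager and membership must be siblings under system.web, and customErrors can be switched off temporarily to see the real error remotely):

        <configuration>
          <system.web>
            <customErrors mode="Off" />
            <roleManager enabled="true" />
            <membership>
              <!-- provider settings here -->
            </membership>
          </system.web>
        </configuration>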

    Read the article

  • How can we apply client-side validation on a FileUpload control in ASP.NET to check whether the filename contains special characters?

    - by subodh
    I am working on the ASP.NET 3.5 platform. I have used a FileUpload control and an asp:Button to upload a file. Whenever I try to upload a file whose name contains special characters (like file#&%.txt), it crashes and gives the message:

        Server Error in 'myapplication' Application.
        A potentially dangerous Request.Files value was detected from the client (filename="...\New Text &#.txt").
        Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. You can disable request validation by setting validateRequest=false in the Page directive or in the configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case.
        Exception Details: System.Web.HttpRequestValidationException: A potentially dangerous Request.Files value was detected from the client (filename="...\New Text &#.txt").
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    How can I prevent this crash using JavaScript on the client side?
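
    A minimal client-side sketch (the control ID and the set of characters to reject are assumptions; server-side validation should still back it up, since JavaScript can be bypassed):

        // Validate the chosen file name before the form posts back.
        // Wire up with: <asp:Button ... OnClientClick="return validateFileName();" />
        function validateFileName() {
            var upload = document.getElementById('<%= FileUpload1.ClientID %>');
            if (upload && /[#&%]/.test(upload.value)) {
                alert('The file name contains characters that are not allowed (#, & or %). Please rename the file and try again.');
                return false;   // cancels the postback
            }
            return true;
        }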

    Read the article

  • Reading server error messages for a URLLoader

    - by Rudy
    Hello, I have a URLLoader with the following code:

        public function getUploadURL():void {
            var request:URLRequest = new URLRequest();
            var url:String = getPath();
            // Adds time to prevent caching
            url += "&time=" + new Date().getTime();
            request.url = url;
            request.method = URLRequestMethod.GET;
            _loader = new URLLoader();
            _loader.dataFormat = URLLoaderDataFormat.TEXT;
            _loader.addEventListener(Event.COMPLETE, getBaseURL);
            _loader.addEventListener(IOErrorEvent.IO_ERROR, onGetUploadURLError);
            _loader.addEventListener(HTTPStatusEvent.HTTP_STATUS, getHttpStatus);
            _loader.load(request);
        }

    My problem is that this request might be wrong, and so the server will give me back a 400 Bad Request, with a message to explain the error. If Event.COMPLETE fires, I can see a message (a response) back from the server in the "data" field of the event, but if onGetUploadURLError or getHttpStatus is called, it just says that the error code is 400 but does not show me the message associated with it. The "data" field is undefined in getHttpStatus and it is "" in onGetUploadURLError. On the contrary, in getBaseURL I get:

        {"ResponseMetadata":{...}}

    I checked and I do get a similar response in my browser for a wrong request, but I cannot see it. Any idea how I can get the message? Thank you very much, Rudy

    Read the article

  • Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be in an inconsistent state.

    My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next URL. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C
        Interrupted; stopping...  // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
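
    When a handled signal arrives during a blocking recv(), the kernel aborts the call with EINTR and Python surfaces it as socket.error; the usual remedy is to catch that specific errno and retry the call. A minimal sketch for Python 2.6 (function names are illustrative):

        import errno
        import signal
        import socket

        stop_requested = False

        def handle_sigint(signum, frame):
            # Just set the flag; the main loop checks it between URLs.
            global stop_requested
            stop_requested = True
            print "Interrupted; stopping..."

        signal.signal(signal.SIGINT, handle_sigint)

        def recv_interruptible(sock, size):
            """recv() that survives the EINTR caused by our SIGINT handler."""
            while True:
                try:
                    return sock.recv(size)
                except socket.error, e:
                    if e.args[0] == errno.EINTR:
                        continue  # interrupted by a signal; retry the call
                    raise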

    Read the article

  • Stop the background of an iPhone web app from responding to swipes

    - by JoeS
    I'm making a mobile version of my website, and trying to make it feel as native as possible on the iPhone. However, any background areas of the page respond to swiping gestures such that you can shift the page partway off the screen. Specifically, if the user touches and swipes left, for example, the content shifts off the edge of the screen and one can see a gray background 'behind' the page. How can this be prevented? I'd like to have the page itself be 320x480 with scroll-swiping disabled (except on list elements that I choose). I have added the following meta tags to the top of the page:

        <meta name="viewport" content="width=320; height=480; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;"/>
        <meta name="apple-mobile-web-app-capable" content="yes" />
        <meta name="apple-mobile-web-app-status-bar-style" content="black" />

    I've also tried the following as the event handler for the touchstart, touchmove, and touchend events of the body element:

        function cancelTouchEvent(e) {
            if (!e) var e = window.event;
            e.cancelBubble = true;
            if (e.stopPropagation) e.stopPropagation();
            if (e.preventDefault) e.preventDefault();
            return false;
        }

    It doesn't stop the swiping behavior, but does prevent clicks on all links on the page... Any help is much appreciated. Thanks!
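
    A sketch of the approach that usually works on iPhone Safari: cancel only touchmove (the event that drives the panning), and leave touchstart/touchend alone so taps and links keep working. The "scrollable" class name for opting elements back in is an assumption:

        document.addEventListener('touchmove', function (e) {
            // Allow scrolling inside elements explicitly marked as scrollable.
            var node = e.target;
            while (node) {
                if (node.className && /\bscrollable\b/.test(node.className)) return;
                node = node.parentNode;
            }
            // Otherwise stop the page itself from panning; taps still work
            // because touchstart/touchend are not cancelled.
            e.preventDefault();
        }, false);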

    Read the article

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic, which appear to be different but do not add any value, as they are specifically created to fool crawlers. An example:

    We tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, it appears that this is only because the developer has made a "loop trap" or "black hole". The crawler will add this new sub page, and the sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique, but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages, which we actually are not interested in.

    These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain for infinity. My question is, what techniques can be used to detect so-called black holes?

    One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legit site, like Wikipedia, can have hundreds of thousands of pages. Such a limit could return a false positive for these kinds of sites. Any feedback is appreciated. Thanks.

    Read the article

  • How to make an import library

    - by user295030
    A requirement was sent to me, below:

        API should be in the form of static library. Company xxx will link the library into a third party application to prevent any possible exposure of the code (dll).

    Could they mean an import library? An import library is a library that automates the process of loading and using a dynamic library. On Windows, this is typically done via a small static library (.lib) of the same name as the dynamic library (.dll). The static library is linked into the program at compile time, and then the functionality of the dynamic library can effectively be used as if it were a static library. This might be what they are alluding to... I am not sure how to make this in VS2008.

    Additional facts: I have a static lib that I use in my current application. Now, I have to convert my app that uses that static lib into an import lib so that they can use a third-party program to access the APIs they provided me, which in turn will use that static lib I am using. I hope I am explaining this clearly. I am just not sure how to go about it in VS2008. I am looking for specific steps to do this. I already have the coding done; I just need to convert it into the form they are asking for, and I have to provide the API they want. Other than that, I need to create a test program which will act as that third-party program, so I can make sure my import library works.
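
    For reference, a sketch of the standard way an import library comes into being on Windows (names are examples): a VS2008 project with the Configuration Type set to "Dynamic Library (.dll)" that exports at least one symbol produces both mylib.dll and the import library mylib.lib, and clients link against the .lib:

        /* mylib.h -- exporting an API from a DLL. Define MYLIB_EXPORTS in the
           DLL project's preprocessor settings; consumers leave it undefined. */
        #ifdef MYLIB_EXPORTS
        #define MYLIB_API __declspec(dllexport)
        #else
        #define MYLIB_API __declspec(dllimport)
        #endif

        MYLIB_API int my_api_function(int value);

    Note this is the opposite of the quoted requirement, which asks for a plain static library precisely so that no .dll ships at all; that would instead be a "Static Library (.lib)" project type with no dllexport decorations.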

    Read the article

  • Handles Comparison: empty classes vs. undefined classes vs. void*

    - by Nawaz
    Microsoft's GDI+ defines many empty classes to be treated as handles internally. For example (source: GdiPlusGpStubs.h):

        // Approach 1
        class GpGraphics {};
        class GpBrush {};
        class GpTexture : public GpBrush {};
        class GpSolidFill : public GpBrush {};
        class GpLineGradient : public GpBrush {};
        class GpPathGradient : public GpBrush {};
        class GpHatch : public GpBrush {};
        class GpPen {};
        class GpCustomLineCap {};

    There are two other ways to define handles. They are:

        // Approach 2
        class BOOK;           // no need to define it!
        typedef BOOK *PBOOK;
        typedef PBOOK HBOOK;  // handle to be used internally

        // Approach 3
        typedef void* PVOID;
        typedef PVOID HBOOK;  // handle to be used internally

    I just want to know the advantages and disadvantages of each of these approaches. One advantage with Microsoft's approach is that they can define a type-safe hierarchy of handles using empty classes, which (I think) is not possible with the other two approaches. What else?

    EDIT: One advantage with the second approach (i.e. using incomplete classes) is that we can prevent clients from dereferencing the handles (that means this approach appears to support encapsulation strongly, I suppose). The code would not even compile if one attempts to dereference handles. What else?
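
    A small illustration of the type-safety point (hypothetical names, not from GDI+): with distinct class types, passing the wrong kind of handle is a compile-time error, whereas a void* handle accepts any pointer silently:

        #include <cstdio>

        // Approach 1/2 style: distinct handle types.
        class GpBrush {};
        class GpPen {};

        void SetBrush(GpBrush* /*brush*/) { std::puts("brush set"); }

        // Approach 3 style: one untyped handle for everything.
        typedef void* HOBJECT;
        void SetBrushUntyped(HOBJECT /*brush*/) { std::puts("brush set?"); }

        int main() {
            GpPen pen;
            // SetBrush(&pen);     // does not compile: GpPen* is not a GpBrush*
            SetBrushUntyped(&pen); // compiles silently: any pointer converts to void*
            return 0;
        }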

    Read the article

  • Is there a limit for the number of files in a directory on an SD card?

    - by jamesh
    I have a project written for Android devices. It generates a large number of files each day. These are all text files and images. The app uses a database to reference these files. The app is supposed to clear up these files after a little use (perhaps after a few days), but this process may or may not be working. That is not the subject of this question.

    Due to a historic accident, the organization of the files is somewhat naive: everything is in the same directory; a .hidden directory which contains a zero-byte .nomedia file to prevent the MediaScanner indexing it. Today, I am seeing an error reported:

        java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html
            at java.io.File.createNewFile(File.java:1263)

    Regarding the sdcard, I see it has plenty of storage left, but counting:

        $ cd /Volumes/NO_NAME/.hidden
        $ ls | wc -w
        9058

    Deleting a number of files seems to have allowed the file creation for today to proceed. Regrettably, I did not try touching a new file to try to reproduce the error on a command line; I also deleted several hundred files rather than a handful. However, my question is: are there hard limits on file size or the number of files in a directory? Am I even on the right track here?

    Nota bene: The SD card is as-is, i.e. I haven't formatted it, so I would guess it would be a FAT-* format. The FAT-32 format has hard limits of file size of 2GB (well above the file sizes I am dealing with) and a limit on the number of files in the root directory. I am definitely not writing files in the root directory.

    Read the article

  • Copying contents of a MySQL table to a table in another (local) database

    - by Philip Eve
    I have two MySQL databases for my site - one is for a production environment and the other, much smaller, is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:

        TransLanguage - non-English languages
        TransModule - modules (bundles of phrases for translation, that can be loaded individually by PHP scripts)
        TransPhrase - individual phrases, in English, for potential translation
        TranslatedPhrase - translations of phrases that are submitted by volunteers
        ChosenTranslatedPhrase - screened translations of phrases.

    The volunteers who do translation are all working on the production site, as they are regular users. I wanted to create a stored procedure that could be used to synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up to date and prevent "unknown phrase" errors from getting in the way while testing. My first effort was to create the following procedure in the test database:

        CREATE PROCEDURE `SynchroniseTranslations` ()
        LANGUAGE SQL
        NOT DETERMINISTIC
        MODIFIES SQL DATA
        SQL SECURITY DEFINER
        BEGIN
            DELETE FROM `TransLanguage`;
            DELETE FROM `TransModule`;
            INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
            INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
            INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
            INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
        END

    When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried to create the procedure the other way around (that is, to have it exist as part of the data dictionary for the production database rather than the test database). If I do it that way, I get an identical message, except it tells me I'm denied the DELETE command rather than SELECT. I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?
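
    One thing worth checking (a sketch; the names are the question's placeholders): SQL SECURITY DEFINER makes the routine execute with the privileges of the user who defined it, not the caller, so the definer account needs rights in both schemas, granted at the database level. Something along these lines, run as an administrator:

        -- The routine's DEFINER needs read access to production and
        -- write access to the test schema.
        GRANT SELECT ON `PRODUCTION_DB`.* TO 'username'@'localhost';
        GRANT SELECT, INSERT, DELETE ON `TEST_DB`.* TO 'username'@'localhost';
        FLUSH PRIVILEGES;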

    Read the article

  • jQuery ajax returns

    - by Tom
    I think this will be some obvious problem, but I cannot figure it out. I hope someone can help me. I have a slider with 3 slides - Intro, Question, Submit. Now I want to make sure that if the question is answered wrong, people cannot slide to Submit. The function to move a slide is like this:

        function changeSlide(slide){
            // In case current slide is question check the answer
            if (jQuery('.modalSteps li.current',base).hasClass('questionStep')){
                checkAnswer(jQuery('input[name="question_id"]',base).val(), jQuery('input[name="answer"]:checked',base).val());
            }
            jQuery('.modalSteps li.current',base).fadeOut('fast',function(){
                jQuery(this).removeClass('current');
                jQuery(slide).fadeIn('fast',function(){
                    jQuery(slide).addClass('current');
                });
            });
            // In case the new slide is question, load the question
            if (jQuery(slide).hasClass('questionStep')){
                var country = jQuery('input[name="country"]:checked',base).val();
                loadQuestion(country);
            }
        }

    Now, as you can see on the first lines, I am calling the function checkAnswer, which takes the id of the question and the id of the answer and passes them to the AJAX call:

        function checkAnswer(question, answer){
            jQuery.ajax({
                url: window.base_url+'ajax/check_answer/'+question+'/'+answer+'/',
                success: function(data){
                    if (!data.success){
                        jQuery('.question',base).html(data.message);
                    }
                }
            });
        }

    The problem I am having is that I cannot say if(checkAnswer(...)){}. Because of Ajax, it always returns false or undefined. What I need is something like this:

        function changeSlide(slide){
            // In case current slide is question check the answer
            if (jQuery('.modalSteps li.current',base).hasClass('questionStep')){
                if (!checkAnswer(jQuery('input[name="question_id"]',base).val(), jQuery('input[name="answer"]:checked',base).val())){
                    return false;
                }
            }
            ...

    so it will prevent the slide from moving on. Now that I'm thinking about it, I will probably have a slide like "Wrong answer", so I could just move the slide there, but I would like to see the first solution anyway. Thank you for tips
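
    Since $.ajax is asynchronous, checkAnswer cannot return the result directly; the usual pattern is to hand it a callback and only advance the slide from inside it. A sketch along those lines (moveSlide is a hypothetical helper holding the fade logic from changeSlide):

        function checkAnswer(question, answer, onChecked){
            jQuery.ajax({
                url: window.base_url + 'ajax/check_answer/' + question + '/' + answer + '/',
                success: function(data){
                    if (!data.success){
                        jQuery('.question', base).html(data.message);
                    }
                    onChecked(data.success);   // report the result when it arrives
                }
            });
        }

        function changeSlide(slide){
            if (jQuery('.modalSteps li.current', base).hasClass('questionStep')){
                checkAnswer(
                    jQuery('input[name="question_id"]', base).val(),
                    jQuery('input[name="answer"]:checked', base).val(),
                    function(ok){
                        if (ok) { moveSlide(slide); }  // only advance on a correct answer
                    });
                return;   // wait for the server before moving anywhere
            }
            moveSlide(slide);
        }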

    Read the article

  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM: how do I optimize to have enough memory?

    - by TomSawyer
    I have 1 dedicated server with Nginx as a proxy for Apache, plus Memcache and MySQL, and 4G of RAM. These days the visitors to my site haven't increased, but my server always gets overloaded at certain times of day (9AM - 15PM). The RAM in use increases second by second until it is full; at that moment my server gets overloaded, and I have to kill all the Apache and MySQL services and reboot to get free memory. Then it fills up again. That's the terrible circle.

    Here is my RAM in use at the moment:

        160 (nginx)  220 (apache)  512 (memcache)  924 (mysql)

    Here are the process counts:

        4 (nginx)  14 (apache)  5 (memcache)  20 (mysql)

    And here is my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
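
    One observation worth checking against this config (a rough worked estimate, not a diagnosis): read_buffer_size, join_buffer_size, sort_buffer_size and read_rnd_buffer_size are all allocated per connection, so the worst case grows with max_connections:

        per-connection buffers ≈ read_buffer_size + join_buffer_size
                                 + sort_buffer_size + read_rnd_buffer_size
                               = 4M + 4M + 4M + 4M = 16M

        worst case ≈ max_connections x 16M = 200 x 16M = 3200M

    Roughly 3.2G of potential MySQL buffer usage on a 4G box, before Apache, Memcache, tmp tables and the query cache, is enough on its own to exhaust RAM under load; shrinking the per-connection buffers or max_connections would cap it.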

    Read the article

  • Remove duplicate records/objects uniquely identified by multiple attributes

    - by keruilin
    I have a model called HeroStatus with the following attributes:

        id
        user_id
        recordable_type
        hero_type (can be NULL!)
        recordable_id
        created_at

    There are over 100 hero_statuses, and a user can have many hero_statuses, but can't have the same hero_status more than once. A user's hero_status is uniquely identified by the combination of recordable_type + hero_type + recordable_id. What I'm trying to say, essentially, is that there can't be a duplicate hero_status for a specific user.

    Unfortunately, I didn't have a validation in place to assure this, so I got some duplicate hero_statuses for users after I made some code changes. For example:

        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2010-05-03 18:30:30'
        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2009-03-03 15:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = 'Hugs', recordable_id = 1, created_at = '2009-02-03 12:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = NULL, recordable_id = 2, created_at = '2009-12-03 08:30:00'

    (The last two are obviously not dups. The first two are.) So what I want to do is get rid of the duplicate hero_statuses. Which one? The one with the most recent date.

    I have three questions:

        1. How do I remove the duplicates using a SQL-only approach?
        2. How do I remove the duplicates using a pure Ruby solution? Something similar to this: http://stackoverflow.com/questions/2790004/removing-duplicate-objects.
        3. How do I put a validation in place to prevent duplicate entries in the future?
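
    For the third question, a sketch of the standard Rails guard (attribute names taken from the question; note that validates_uniqueness_of is race-prone without a matching unique index behind it):

        class HeroStatus < ActiveRecord::Base
          validates_uniqueness_of :recordable_id,
            :scope => [:user_id, :recordable_type, :hero_type]
        end

    For the first, one possible MySQL-flavoured delete (an assumption that the database is MySQL, since multi-table DELETE syntax is used; it keeps the oldest row of each duplicate group and removes the newer ones):

        DELETE newer
          FROM hero_statuses AS newer
          JOIN hero_statuses AS older
            ON newer.user_id         = older.user_id
           AND newer.recordable_type = older.recordable_type
           AND newer.recordable_id   = older.recordable_id
           AND (newer.hero_type = older.hero_type
                OR (newer.hero_type IS NULL AND older.hero_type IS NULL))
           AND newer.created_at > older.created_at;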

    Read the article

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself. I only have one file that seems to be out of sync right now, but I really need to know how this happened to prevent it from happening in the future. Scenario:

        1. I created the Mercurial repository on the server using an existing project directory. The directory contained the file 'mypage.aspx'.
        2. On my workstation, I cloned the central repository.
        3. I made an edit to mypage.aspx.
        4. hg commit, then hg push from my workstation to the central server.
        5. Now, if I look at mypage.aspx in the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows the current version on the server's disk is the original version, not the edited version!

    I have not experimented with branching at all yet, so I'm sure I'm not getting a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository to a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit. So, to recap:

        Central repository = original file, but shows change in revision history (bad)
        Local repository 'A' = updated file, shows change in revision history (good)
        Local repository 'B' = original file, but shows change in revision history (bad)

    Help please! Thanks, David
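
    A likely explanation for the central repository (hedged, since it assumes nothing else touched the server's files): hg push updates the repository's history but never the remote working directory, so the files on the server's disk stay at whatever revision was last checked out there. Refreshing them, and checking for a second head if a fresh clone still lacks the edit, would be:

        $ hg update    # on the server: sync the working directory to the latest revision
        $ hg heads     # more than one head means hg update stopped short of the edit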

    Read the article

  • Dangling pointers and double free

    - by user151410
    After some painful experiences, I understand the problem of dangling pointers and double free. I am seeking proper solutions. aStruct has a number of fields, including other arrays.

        aStruct *A = NULL, *B = NULL;
        A = (aStruct*) calloc(1, sizeof(aStruct));
        B = A;
        free_aStruct(A);
        ... // bunch of other code in various places.
        free_aStruct(B);

    Is there any way to write free_aStruct(X) so that free_aStruct(B) exits gracefully?

        void free_aStruct(aStruct *X) {
            if (X != NULL) {
                if (X->a != NULL) { free(X->a); X->a = NULL; }
                free(X);
                X = NULL;
            }
        }

    Doing the above only sets A = NULL when free_aStruct(A); is called. B is now dangling. How can this situation be avoided / remedied? Is reference counting the only viable solution? Or are there other "defensive" free approaches to prevent free_aStruct(B); from exploding? Thanks, Russ
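
    One common defensive pattern, sketched below: take the address of the caller's pointer so the function can actually clear it. It is only a partial fix, since aliases like B still dangle, which is why reference counting or a single-owner discipline is the usual complete answer:

        #include <stdlib.h>

        typedef struct {
            int *a;   /* example field; the real aStruct has more */
        } aStruct;

        /* Takes the address of the caller's pointer so it can be cleared. */
        void free_aStruct(aStruct **X) {
            if (X == NULL || *X == NULL)
                return;          /* a second call on the same pointer is a no-op */
            free((*X)->a);       /* free(NULL) is defined to do nothing */
            free(*X);
            *X = NULL;           /* the caller's own pointer no longer dangles */
        }

        /* Usage:
               aStruct *A = calloc(1, sizeof(aStruct));
               aStruct *B = A;
               free_aStruct(&A);   // frees; A becomes NULL
               free_aStruct(&A);   // safe no-op
               free_aStruct(&B);   // still a double free: B was never cleared! */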

    Read the article

  • Rails - Seeking a DRY authorization method compatible with various nested resources

    - by adam
    Consensus is that you shouldn't nest resources deeper than 1 level. So if I have 3 models like this (below is just a hypothetical situation):

        User has_many Houses has_many Tenants

    then to abide by the above I do:

        map.resources :users, :has_many => :houses
        map.resources :houses, :has_many => :tenants

    Now I want the user to be able to edit both their houses and their tenants' details, but I want to prevent them from trying to edit another user's houses and tenants by forging the user_id part of the URLs. So I create a before_filter like this:

        def prevent_user_acting_as_other_user
          if User.find_by_id(params[:user_id]) != current_user()
            @current_user_session.destroy
            flash[:error] = "Stop screwing around wiseguy"
            redirect_to login_url()
            return
          end
        end

    For houses that's easy, because the user_id is passed via edit_user_house_path(@user, @house). But in the tenants' case, house_tenant_path(@house), no user_id is passed. I can get the user id by doing @house.user.id, but then I'd have to change the code above to this:

        def prevent_user_acting_as_other_user
          if params[:user_id]
            @user = User.find(params[:user_id])
          elsif params[:house_id]
            @user = House.find(params[:house_id]).user
          end

          if @user != current_user()
            # kick em out
          end
        end

    It does the job, but I'm wondering if there is a more elegant way. Every time I add a new resource that needs protecting from user forgery, I'll have to keep adding conditionals. I don't think there will be many cases, but I would like to know a better approach if one exists.
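
    One common alternative (a sketch; the controller bodies are illustrative): instead of comparing owners in a filter, scope every find through the association, so records belonging to another user simply cannot be found, no matter what IDs are forged in the URL:

        class HousesController < ApplicationController
          def edit
            # Raises ActiveRecord::RecordNotFound if the house isn't the
            # current user's, regardless of any user_id forged in the URL.
            @house = current_user.houses.find(params[:id])
          end
        end

        class TenantsController < ApplicationController
          def edit
            house   = current_user.houses.find(params[:house_id])
            @tenant = house.tenants.find(params[:id])
          end
        end

        # A shared rescue keeps the kick-out logic in one place:
        class ApplicationController < ActionController::Base
          rescue_from ActiveRecord::RecordNotFound do
            flash[:error] = "Stop screwing around wiseguy"
            redirect_to login_url
          end
        end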

    Read the article

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is through memory, so we build up the request, then read the stream into ExpertPDF and write the bits to file. All the files we have been requesting so far are just plain HTML documents.

    Our designers update CSS files or change the HTML and re-request the documents as PDFs, but oftentimes things are getting cached. Take, for example, if I rename the only CSS file and view the HTML page through a web browser: the page looks broken because the CSS doesn't exist. But if I request that page through the PDF generator, it still looks OK, which means somewhere the CSS is cached. Here's the relevant PDF creation code:

        // Create a request
        HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
        request.UserAgent = "IE 8.0";
        request.ContentType = "application/x-www-form-urlencoded";
        request.Method = "GET";

        // Send the request
        HttpWebResponse resp = (HttpWebResponse)request.GetResponse();
        if (resp.IsFromCache)
        {
            System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!");
        }
        else
        {
            System.Web.HttpContext.Current.Trace.Write("not from cache");
        }

        // Read the response
        pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf");

    When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere, or something I can do to the request object to do a hard refresh of all resources?

    UPDATE: I put ?foo at the end of my style href links and this updates the CSS every time. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to use this inelegant solution?
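
    For the top-level request, the BCL has an explicit cache knob worth ruling out (a sketch; note the caveat that the linked CSS is fetched separately by whatever engine renders the HTML to PDF, so this only eliminates one layer of caching):

        using System.Net;
        using System.Net.Cache;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        // Bypass any WinINET/intermediate caches entirely for this request.
        request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);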

    Read the article

  • How do I get the WVGA Android browser to stop scaling my images?

    - by Dan Fabulich
    I'm designing an HTML page for display in Android browsers. Consider this simple example page:

        <html>
        <head><title>Simple!</title></head>
        <body>
            <p><img src="http://sstatic.net/so/img/logo.png"></p>
        </body>
        </html>

    It looks just fine on the standard HVGA phones (320x480), but on HDPI WVGA sizes (480x800 or 480x854) the built-in browser automatically scales the image up; it looks ugly. I've read that I should be able to use this tag to force the browser to stop scaling my page:

        <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0; minimum-scale=1.0; user-scalable=0;" />

    ...but all that does is disable user scaling (the zoom buttons disappear); it doesn't actually prevent the browser from scaling my image. Adjusting the scale factors (setting them all to 2.0 or 0.5) has no effect at all. How can I force the WVGA browser to stop scaling my images?
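
    One Android-specific viewport property worth trying (hedged: it is supported by the Android browser from roughly Android 2.0 onward and ignored elsewhere): target-densitydpi tells the browser to skip its density scaling so one CSS pixel maps to one physical pixel on HDPI screens.

        <!-- device-dpi disables the Android browser's automatic density scaling. -->
        <meta name="viewport"
              content="width=device-width, target-densitydpi=device-dpi, initial-scale=1.0, user-scalable=no" />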

    Read the article

  • Interrupting Prototype handler, alert() vs event.stop()

    - by lxs
    Here's the test page I'm using. This version works fine, forwarding to #success:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html><head>
        <script type="text/javascript" src="prototype.js"></script>
        </head><body>
        <form id='form' method='POST' action='#fail'>
            <button id='button'>Oh my giddy aunt!</button>
            <script type="text/javascript">
                var fn = function() {
                    $('form').action = "#success";
                    $('form').submit();
                }
                $('button').observe('mousedown', fn);
            </script>
        </form>
        </body></html>

    If I empty the handler:

        var fn = function() { }

    the form is submitted, but of course we are sent to #fail this time. With an alert in the handler:

        var fn = function() { alert("omg!"); }

    the form is not submitted. This is awfully curious. With event.stop(), which is supposed to prevent the browser taking the default action:

        var fn = function(event) { event.stop(); }

    we are sent to #fail. So alert() is more effective at preventing a submission than event.stop(). What gives? I'm using Firefox 3.6.3 and Prototype 1.6.0.3. This behaviour also appears in Prototype 1.6.1.
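
    A plausible reading (hedged, since only the question's snippets are available): the handler observes mousedown, but a button submits its form on the later click event, so event.stop() cancels only the mousedown's default, while alert() steals focus mid-gesture and the click never completes. Observing click instead puts the cancel and the redirect in the same event:

        var fn = function (event) {
            event.stop();                 // cancel the button's default submit to #fail
            $('form').action = "#success";
            $('form').submit();
        };
        $('button').observe('click', fn);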

    Read the article

  • Postback Removing Styling from Page

    - by Roy
    Hi. Currently I've created an ASP.NET page that has a dropdown control with AutoPostBack set to true. I've also added color backgrounds for individual list items. Whenever an item is selected in the dropdown control, the styling is completely removed from all of the list items. How can I prevent this from happening? I need the postback to pull data based on the dropdown item that is selected. Here is my code.

    aspx file:

        <asp:DropDownList ID="EmpDropDown" AutoPostBack="True" OnSelectedIndexChanged="EmpDropDown_SelectedIndexChanged" runat="server">
        </asp:DropDownList>
        <asp:TextBox ID="MessageTextBox" TextMode="MultiLine" Width="550" Height="100px" runat="server"></asp:TextBox>

    aspx.cs code-behind:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                GetEmpList();
            }
        }

        protected void EmpDropDown_SelectedIndexChanged(object sender, EventArgs e)
        {
            GetEmpDetails();
        }

        private void GetEmpList()
        {
            SqlDataReader dr = ToolsLayer.GetEmpList();
            int currentIndex = 0;
            while (dr.Read())
            {
                EmpDropDown.Items.Add(new ListItem(dr["Title"].ToString(), dr["EmpKey"].ToString()));
                if (dr["Status"].ToString() == "disabled")
                {
                    EmpDropDown.Items[currentIndex].Attributes.Add("style", "background-color:red;");
                }
                currentIndex++;
            }
            dr.Close();
        }

        private void GetEmpDetails()
        {
            SqlDataReader dr = ToolsLayer.GetEmpDetails(EmpDropDown.SelectedValue);
            while (dr.Read())
            {
                MessageTextBox.Text = dr["Message"].ToString();
            }
            dr.Close();
        }

    Thank you
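
    A sketch of the usual explanation and fix (assuming standard WebForms behaviour): ListItem.Attributes are not persisted in ViewState, so styles added in GetEmpList survive only the first render; re-applying them on every request, for example in PreRender, restores the colors after each postback. ApplyItemStyles below is a hypothetical refactor of the styling loop that re-reads the status data:

        protected void Page_PreRender(object sender, EventArgs e)
        {
            // ListItem.Attributes are dropped across postbacks (not stored in
            // ViewState), so the styling pass must run on every request.
            ApplyItemStyles();
        }

        private void ApplyItemStyles()
        {
            SqlDataReader dr = ToolsLayer.GetEmpList();
            int currentIndex = 0;
            while (dr.Read() && currentIndex < EmpDropDown.Items.Count)
            {
                if (dr["Status"].ToString() == "disabled")
                {
                    EmpDropDown.Items[currentIndex].Attributes.Add("style", "background-color:red;");
                }
                currentIndex++;
            }
            dr.Close();
        }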

    Read the article
