Search Results

Search found 13867 results on 555 pages for 'avoid learning'.

Page 388 of 555

  • How to fix this Python error? RuntimeError: dictionary changed size during iteration

    - by aF
    Hello, my Python program gives me this error: Exception in thread Thread-163: Traceback (most recent call last): File "C:\Python26\lib\threading.py", line 532, in __bootstrap_inner self.run() File "C:\Python26\lib\threading.py", line 736, in run self.function(*self.args, **self.kwargs) File "C:\Users\Public\SoundLog\Code\Código Python\SoundLog\SoundLog.py", line 337, in getInfo self.data1 = copy.deepcopy(Auxiliar.DataCollection.getInfo(1)) File "C:\Python26\lib\copy.py", line 162, in deepcopy y = copier(x, memo) File "C:\Python26\lib\copy.py", line 254, in _deepcopy_dict for key, value in x.iteritems(): RuntimeError: dictionary changed size during iteration. How can I prevent this from happening? Thanks in advance ;)
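
    The traceback shows copy.deepcopy walking a dictionary while another thread is still adding or removing keys. A minimal sketch of one way to remove the race, assuming a hypothetical lock shared with the thread that mutates the dict (the lock and function names below are illustrative, not from the original program):

        import copy
        import threading

        info_lock = threading.Lock()  # the writing thread must hold this same lock while mutating the dict

        def snapshot(info):
            # Hold the lock for the whole deepcopy so the dict cannot change size mid-iteration.
            with info_lock:
                return copy.deepcopy(info)

        # If locking the writer is not an option, copying from a frozen list of items at least
        # keeps the top-level iteration off the live dict (nested values can still race):
        def snapshot_items(info):
            return dict((key, copy.deepcopy(value)) for key, value in list(info.items()))

    Either way, the robust fix is getting the reader and the writer to agree on a single lock; copying alone only narrows the window.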

    Read the article

  • Figuring out what makes a C++ class abstract in VS2008

    - by suszterpatt
    I'm using VS2008 to build a plain old C++ program (not C++/CLI). I have an abstract base class and a non-abstract derived class, and building this: Base* obj; obj = new Derived(); fails with the error "'Derived': cannot instantiate abstract class". (It may be worth noting, however, that if I hover over Base with the cursor, VS pops up a tooltip saying "class Base abstract", while hovering over Derived only says "class Derived", with no "abstract".) The definitions of these classes are fairly large and I'd like to avoid manually checking whether each method has been overridden. Can VS do this for me somehow? Any general tips on pinpointing the exact parts of the class's definition that make it abstract?

    Read the article

  • Cookie is setting twice (duplicated)

    - by Erik Larsson
    I'm new to cookies, and I'm trying to set a cookie to store the referrer (the original referrer). But when I try this function: function do_it_cookie() { // Check if cookie exists if (isset($_COOKIE['ref'])) { // It does exist, do nothing or anything... } else { setcookie ('ref', $_SERVER['HTTP_REFERER'], time() + 60, '/'); header ("Location: http://www.nyttforetag.com/mind-your-own-business/"); } } I want to store the cookie on the user's computer for 30 days; if they return, I want to know the initial referring URL. But when I use this and, let's say, go to another page on my site and then back to the homepage, it sets a new cookie with the exact same name and with the referrer of the previous page. Is there a way to avoid this?

    Read the article

  • How to make a file with .pt extension, with xml syntax highlighting and vim's plugin snipmate load pt snippets

    - by Somebody still uses you MS-DOS
    I have the following in my .vimrc: au BufNewFile,BufRead *.pt set filetype=xml This is needed because although I'm editing a file with a *.pt extension, it's really a valid xml file: setting the filetype like this gives me syntax highlighting. I'm using vim's snipmate plugin, and tried to create pt.snippets for my specific needs, since these files are Zope Page Templates (ZPT with TAL). Now I have a problem: I don't want to put these snippets in xml.snippets, since they aren't really generic xml snippets, but my *.pt files are set to xml, so my pt snippets aren't loaded unless I run :set filetype=pt on my pt file in vim - but then I lose syntax highlighting. I would like to have a pt file, with xml syntax highlighting, that can load a pt.snippets file from snipmate. How can I do it? (I would like to avoid putting my snippets in a generic snippet file; keeping them only in pt.snippets is easier to maintain.)

    Read the article

  • Example where Up-Front Design (Would have) Saved You Time

    - by Winston Ewert
    In various places I've seen the claim that by designing a system up front, you can significantly reduce development time; i.e. by spending an hour designing you can save a week of coding. My problem is that I have never seen a situation where I found this to be true, so I want to hear about any examples people have where it was: What sort of problem arose during coding (or was avoided)? How could you have avoided (or did you avoid) the problem by spending more time on design? Why was it (or would it have been) hard to fix the problem in the code?

    Read the article

  • Keeping track of dirty blocks on a block device

    - by mikeY
    I'm looking for a way to keep track of which blocks on a block device are modified after a point in time. What I eventually want to use this for is keeping two 2TB disks in sync, one of which only comes online (connected through USB) once a month. Without knowing which blocks have been modified, I have to go through the whole 2TB every time. I'm using a recent GNU/Linux OS and have C and Python experience. I'm hoping to avoid writing kernel-level code, as I don't have any experience in that area whatsoever. My current theory is that there should be some hook somewhere where my code can get called when a disk flush is performed. Any ideas?
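
    If kernel hooks turn out to be more trouble than they are worth, one pure-userspace fallback is a per-block checksum map saved at each sync: the next run still has to read the whole source disk once, but only blocks whose digests changed need to be written to the slow USB copy. A rough Python sketch, with an illustrative block size:

        import hashlib

        BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks keep the checksum map small even for 2TB

        def block_digests(device_path):
            """Yield (block_index, sha1_digest) for every block on the device."""
            with open(device_path, 'rb') as dev:
                index = 0
                while True:
                    block = dev.read(BLOCK_SIZE)
                    if not block:
                        break
                    yield index, hashlib.sha1(block).digest()
                    index += 1

        def changed_blocks(device_path, previous_digests):
            """Return the block indices whose contents differ from the saved digest map."""
            return [index for index, digest in block_digests(device_path)
                    if previous_digests.get(index) != digest]

    Reading 2TB sequentially still takes hours, so this only helps if writes to the USB disk are the real bottleneck.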

    Read the article

  • CoreData and many NSArrayController

    - by unixo
    In my Core Data application, I have an outline view on the left of the main window, acting as a source list (like iTunes); on the right I display the appropriate view, based on the outline selection. Each view has its own components, such as a table view, connected to an array controller owned by that view. Very often different views display the same data, for example a table view of the same entity. From a performance point of view, is it better to have a single array controller per entity and share it between all views, or does Core Data's caching avoid the memory waste?

    Read the article

  • What are the common patterns in web programming?

    - by lankerisms
    I have been trying to write my first big web app (more than one CGI file), and as I kept moving forward with the rough prototype, trying in parallel to predict more tasks, this is the to-do list that accumulated (in no particular order): * Validations and input sanitization * Object versioning (to avoid edit conflicts; I don't want hard locks) * Exception handling * memcache * XSS and injection protections * javascript * html * ACLs * phonetics in search, match and find duplicates (for form validation) * Ajaxify!!! (I have snipped off the project-specific items.) I know that each to-do item will be closely tied to its project and the technologies used. What I am wondering, though, is whether there is a pattern in your to-do items, as well as in the sequence in which you experienced folks have come across them.

    Read the article

  • Rails + simple role system through associative table

    - by user202411
    So I have a Ninja model which has many Hovercrafts through ninja_hovercrafts (which stores the ninja_id and the hovercraft_id). It is my understanding that this kind of arrangement should be set up so that the associative table stores only enough information to bind the two classes. But I'd like to use the associative table as a very streamlined authorization hub in my application. So I'd also like this table to tell my system whether the binding makes the ninja the pilot or co-pilot of a given hovercraft, through a "role" field in the table. My questions are: Is this ugly? Is this normal? Are there methods built into Rails that would help me automagically create Ninja and Hovercraft associations WITH the role? For example, could I have a nested form that creates both ninjas and hovercrafts in a way that the role field in ninja_hovercrafts would also be filled? If managing my application roles this way isn't a good idea, what's the less resource-heavy alternative (my app is being designed to avoid scalability problems such as excessive joins, includes, etc.)? Thank you

    Read the article

  • Is it possible to overcome the timeout issue for a function call in C#?

    - by infant programmer
    In my program I call a method xslTransform.Load(strXmlQueryTransformPath, xslSettings, new XmlUrlResolver()); The problem I am facing is that sometimes this call doesn't finish within a reasonable time. Sometimes it raises a timeout error after a long wait, which in turn causes this part of the application to shut down. That is what I want to avoid. So if it exceeds a certain time, say 10 seconds, I need to call the method again. Is it possible to add some code around this call to meet that requirement?

    Read the article

  • How to use a switch statement with enumerations in C#

    - by Maximus Decimus
    I want to use a switch statement in order to avoid many ifs. So I did this: public enum Protocol { Http, Ftp } string strProtocolType = GetProtocolTypeFromDB(); switch (strProtocolType) { case Protocol.Http: { break; } case Protocol.Ftp: { break; } } but I have a problem comparing an enum and a string. If I add Protocol.Http.ToString() there is another error, because case labels allow only constant expressions. If I change it to switch (Enum.Parse(typeof(Protocol), strProtocolType)) that doesn't work either. So, is it possible to use a switch statement in my case or not?

    Read the article

  • Is window.location.href = 'some_page.html' followed by search engines?

    - by Arkaaito
    Currently our website uses links to allow the user to change their locale. The problem with this is that you get a lot of random outlinks from each page on the site to... the same page, in other languages. When a search engine traverses this, it gets an excessively complex view of the site. We were going to change it to a form post to avoid this. However, it seems to me that we should just be able to change it to an onclick="window.location.href='change_my_language.php'" rather than an href="change_my_language.php". Am I right? Or do the major search engines scan for and follow this sort of thing nowadays?

    Read the article

  • Is SHA sufficient for checking file duplication? (sha1_file in PHP)

    - by wag2639
    Suppose you wanted to make a file hosting site where people upload their files and send a link to their friends to retrieve them later, and you want to ensure files are not duplicated where we store them. Is PHP's sha1_file good enough for the task? Is there any reason not to use md5_file instead? For the frontend, the file will be obscured, using the original file name stored in a database, but an additional concern is whether this would reveal anything about the original poster. Does a file carry any meta information with it, like last modified or who posted it, or is this stuff kept in the file system? Also, is using a salt frivolous, since security against rainbow table attacks means nothing here and the hash could later be used as a checksum? One last thing: scalability? Initially it's only going to be used for small files a couple of megs big, but eventually... Edit 1: The point of the hash is primarily to avoid file duplication, not to create obscurity.
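
    The question is about PHP's sha1_file, but the deduplication pattern itself is language-agnostic. A small Python sketch of hash-keyed (content-addressed) storage, with a hypothetical storage directory, just to illustrate the idea; note that no salt is involved, because the hash is used as an identity key for the content, not as a password hash:

        import hashlib
        import os
        import shutil

        STORE = '/srv/filestore'  # hypothetical directory holding one copy per distinct content

        def file_sha1(path, chunk_size=1 << 20):
            # Hash in fixed-size chunks so large uploads never have to fit in memory.
            digest = hashlib.sha1()
            with open(path, 'rb') as handle:
                for chunk in iter(lambda: handle.read(chunk_size), b''):
                    digest.update(chunk)
            return digest.hexdigest()

        def store_upload(upload_path):
            """Keep a single physical copy per distinct content, keyed by its hash."""
            key = file_sha1(upload_path)
            target = os.path.join(STORE, key)
            if not os.path.exists(target):
                shutil.copyfile(upload_path, target)
            return key  # record the key plus the user-supplied filename in the database

    With this layout the original filename, uploader and upload time live only in the database row, never in the stored blob itself.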

    Read the article

  • Secure password transmission over unencrypted tcp/ip

    - by academicRobot
    I'm in the design stages of a custom TCP/IP protocol for mobile client-server communication. When it's not required (the data is not sensitive), I'd like to avoid using SSL, for overhead reasons (both handshake latency and conserving cycles). My question is: what is the best-practice way of transmitting authentication information over an unencrypted connection? Currently I'm liking SRP or J-PAKE (they generate secure session tokens, are hash/salt friendly, and allow kicking into TLS when necessary), which I believe are both implemented in OpenSSL. However, I am a bit wary, since I don't see many people using these algorithms for this purpose. I would also appreciate pointers to any materials discussing this topic in general, since I had trouble finding any.

    Read the article

  • Are PDO prepared statements sufficient to prevent SQL injection?

    - by Mark Biek
    Let's say I have code like this: $dbh = new PDO("blahblah"); $stmt = $dbh->prepare('SELECT * FROM users where username = :username'); $stmt->execute( array(':username' => $_REQUEST['username']) ); The PDO documentation says The parameters to prepared statements don't need to be quoted; the driver handles it for you. Is that truly all I need to do to avoid SQL injections? Is it really that easy? You can assume MySQL if it makes a difference. Also, I'm really only curious about the use of prepared statements against SQL injection. In this context, I don't care about XSS or other possible vulnerabilities.

    Read the article

  • Android / Java rare and seemingly impossible exception causing force close

    - by Guzba
    Hello all, I have an interesting problem being reported to me from an Android application I have published. I have a two-dimensional array that I am iterating through using two for loops like so: for (int i = 0; i < arr.length; ++i) { for (int j = 0; j < arr[i].length; ++j) { if (arr[i][j] != 0) // does stuff } } The problem is, somehow arr[i][j] != 0 is throwing an ArrayIndexOutOfBoundsException, but very rarely. I have thousands of people use the app on a daily basis and get maybe twenty force close reports. Is this something I can't avoid, maybe a problem with the phone's memory, etc., or is there something I can do that I haven't thought of yet? Thanks.

    Read the article

  • How can I implement "real time" messaging on Google AppEngine?

    - by Freed
    I'm creating a web application on Google AppEngine where I want the user to be notified as quickly as possible after certain events occur. The problem is similar to, say, a chat server, in that I need something happening on one connection (someone writing a message in a chat room) to propagate to a number of other connections (other people in that chat room get the message). To get speedy updates from the server to the client I'm planning on using long polling with XmlHttpRequest, hoping that AppEngine won't interfere other than possibly restricting the timeout. The real problem, however, is efficient notification between connections on AppEngine. Is there any support for this type of cross-connection notification on AppEngine that does not involve busy-waiting? The only tools I can think of to do this at all are either the datastore (slow) or memcache (unreliable), and neither would let me avoid busy-waiting. Note: I know about XMPP support on AppEngine. It's related, but I want a browser-based solution; sending messages to the users by XMPP is not an option.

    Read the article

  • scala jrebel superclass change

    - by coubeatczech
    Hi, I'm using JRebel with Scala and I quite frequently find I need to restart the server because JRebel is unable to reload a class if its superclass was changed. This happens mainly when I change anonymous functions, as I can deduce from the JRebel error description: Class 'mypackage.NewBook$$anonfun$2' superclass was changed from 'scala.runtime.AbstractFunction1' to 'scala.runtime.AbstractFunction2' and could not be reloaded. Is there any way I can design my code to avoid this? Does the Scala compiler take the functions and number them from one in the order they appear in the source code?

    Read the article

  • SQL Server db_owner

    - by andrew007
    Hi, in my SQL Server 2008 database I have a user who is in the "db_datareader", "db_datawriter" and "db_ddladmin" DB roles; however, when he tries to modify a table with SSMS he receives a message saying: You are not logged in as the database owner or system administrator. You might not be able to save changes to tables that you do not own. Of course, I would like to avoid this message, but so far I have not found a way... So I tried adding the user to the "db_owner" role, and then of course the message above goes away. My question is: is it possible to keep the user in the "db_owner" role, but deny some actions, like altering users? I tried the "alter any user" securable at the DB level, but it does not work... THANKS!

    Read the article

  • Python: Pickling highly-recursive objects without using `setrecursionlimit`

    - by cool-RR
    I've been getting RuntimeError: maximum recursion depth exceeded when trying to pickle a highly-recursive tree object. Much like this asker here. He solved his problem by setting the recursion limit higher with sys.setrecursionlimit. But I don't want to do that: I think that's more of a workaround than a solution. Because I want to be able to pickle my trees even if they have 10,000 nodes in them. (It currently fails at around 200.) (Also, every platform's true recursion limit is different, and I would really like to avoid opening this can of worms.) Is there any way to solve this at the fundamental level? If only the pickle module would pickle using a loop instead of recursion, I wouldn't have had this problem. Maybe someone has an idea how I can cause something like this to happen, without rewriting the pickle module? Any other idea how I can solve this problem will be appreciated.
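
    One way to attack this at the object level rather than the interpreter level is to make the tree's pickled form flat: a __getstate__/__setstate__ pair that turns it into a list of (value, parent_index) rows means pickle only ever serializes a flat list and never recurses down the branches. A minimal sketch with a hypothetical Node/Tree shape (the real tree class will differ):

        import pickle

        class Node(object):
            def __init__(self, value):
                self.value = value
                self.children = []

        class Tree(object):
            def __init__(self, root):
                self.root = root

            def __getstate__(self):
                # Flatten iteratively: pickle then serializes a plain list, no deep recursion.
                flat = []                      # rows of (value, parent_index)
                stack = [(self.root, -1)]
                while stack:
                    node, parent = stack.pop()
                    index = len(flat)
                    flat.append((node.value, parent))
                    for child in reversed(node.children):
                        stack.append((child, index))
                return flat

            def __setstate__(self, flat):
                # Rebuild iteratively as well, so unpickling is recursion-free too.
                nodes = []
                for value, parent in flat:
                    node = Node(value)
                    nodes.append(node)
                    if parent >= 0:
                        nodes[parent].children.append(node)
                self.root = nodes[0]

        # A 10,000-node chain round-trips without touching sys.setrecursionlimit.
        root = Node(0)
        current = root
        for i in range(1, 10000):
            current.children.append(Node(i))
            current = current.children[0]
        restored = pickle.loads(pickle.dumps(Tree(root)))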

    Read the article

  • How do you resolve the common collision between type name and object name?

    - by Catskul
    Since the convention is to capitalize the first letter of public properties, the old C++ convention of an initial capital for type names and an initial lowercase for non-type names does not prevent the classic name collision: class FooManager { public BarManager BarManager { get; set; } // Feels very wrong. // Recommended naming convention? public int DoIt() { return Foo.Blarb + Foo.StaticBlarb; // 1st and 2nd Foo are two // different symbols } } class BarManager { public int Blarb { get; set; } public static int StaticBlarb { get; set; } } It seems to compile, but feels so wrong. Is there a recommended naming convention to avoid this?

    Read the article

  • Remove .php extension (explicitly written) for friendly URL

    - by miquel
    I'm using this .htaccess to remove the .php extension from my site's files: RewriteEngine on RewriteBase / RewriteCond %{SCRIPT_FILENAME} !-d RewriteCond %{SCRIPT_FILENAME} !-f RewriteRule ^(.*)$ $1.php [L,QSA] Now, if I go to www.mysite.com/home it works fine: home.php is served and the URL stays friendly. But if I type the URL www.mysite.com/home.php, home.php is served and the URL is not friendly. How can I avoid this behavior? I want that when the user types www.mysite.com/home.php, the URL displayed in the address bar is www.mysite.com/home

    Read the article

  • Exposing a "dumbed-down", read-only instance of a Model in GAE

    - by Blixt
    Does anyone know a clever way, in Google App Engine, to return a wrapped Model instance that only exposes a few of the original properties, and does not allow saving the instance back to the datastore? I'm not looking for ways of actually enforcing these rules, obviously it'll still be possible to change the instance by digging through its __dict__ etc. I just want a way to avoid accidental exposure/changing of data. My initial thought was to do this (I want to do this for a public version of a User model): class ReadOnlyUser(db.Model): display_name = db.StringProperty() @classmethod def kind(cls): return 'User' def put(self): raise SomeError() Unfortunately, GAE maps the kind to a class early on, so if I do ReadOnlyUser.get_by_id(1) I will actually get a User instance back, not a ReadOnlyUser instance.
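
    One way to sidestep the kind-to-class mapping entirely is to make the public view not a db.Model at all, but a plain wrapper that copies a whitelist of properties and simply has no put(). A minimal sketch, assuming the existing User model from the question (class and property names are illustrative):

        class PublicUser(object):
            """Plain read-only view of a User; not a db.Model, so it can never be put()."""

            _exposed = ('display_name',)   # whitelist of properties to copy

            def __init__(self, user):
                for name in self._exposed:
                    setattr(self, name, getattr(user, name))

            @classmethod
            def get_by_id(cls, user_id):
                user = User.get_by_id(user_id)   # the real model still does the datastore work
                return cls(user) if user is not None else None

    Because the wrapper holds only copied values, changing it cannot accidentally touch the datastore, and there is no second model class competing for the 'User' kind.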

    Read the article

  • Copy of Access mdb database being updated by live database

    - by James
    I'm trying to compute statistics for data held in an Access .mdb database. In order to avoid interfering with the live database, I'm working from a copy which I made by simply using copy-paste in Windows Explorer. The copy resides in the same directory, but with a different name. I'm using R and RODBC to connect to the copy of the file. The strange thing is that new data being added to the original live database is appearing in my queries, despite the file timestamps of the copy not changing at all. It is also causing some slowdown in the live database. My understanding was that .mdb files are standalone; is this not the case? Should I have copied the database in a different way?

    Read the article

  • Twitter search c#

    - by Lily
    Hi, I have implemented a method which manually scrapes the Twitter search page and gets the tweets from different pages. But since there is a fast refresh rate, the method triggers an exception. Therefore I have decided to use the TweetSharp API instead: var search = FluentTwitter.CreateRequest() .AuthenticateAs(TWITTER_USERNAME, TWITTER_PASSWORD) .Users().SearchFor("dumbledore"); var result = search.Request(); var users = result.AsUsers(); This code was on the site. Does anyone know how I can avoid giving my credentials and retrieve tweets from all users, not just the ones I have as friends? Thanks!

    Read the article
