Search Results

Search found 8463 results on 339 pages for 'bad learner'.

Page 198/339 | < Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >

  • python: how to jump to a particular line in a huge text file?

    - by photographer
    Are there any alternatives to the code below?

        startFromLine = 141978  # or whatever line I need to jump to
        urlsfile = open(filename, "rb", 0)
        linesCounter = 1
        for line in urlsfile:
            if linesCounter > startFromLine:
                DoSomethingWithThisLine(line)
            linesCounter += 1

    I'm processing a huge text file (~15MB) with lines of unknown but varying length, and I need to jump to a particular line whose number I know in advance. I feel bad about processing the lines one by one when I know I could ignore at least the first half of the file. I'm looking for a more elegant solution, if there is one.
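    A minimal sketch of one common alternative, assuming the target line number is only known at runtime and the lines really do vary in length: itertools.islice skips the unwanted lines inside the iterator machinery instead of in a hand-written counter loop. It still reads the file sequentially; truly skipping the first half would need fixed-width lines or a precomputed byte-offset index.

        from itertools import islice

        def process_from_line(filename, start_line, handle_line):
            # handle_line stands in for DoSomethingWithThisLine from the question.
            with open(filename, "r") as urlsfile:
                # Drop the first start_line - 1 lines, then process the rest.
                for line in islice(urlsfile, start_line - 1, None):
                    handle_line(line)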

    Read the article

  • Your experience on using configuration & VCS tools

    - by smalldream
    I am doing my final-year project and would like to do a little survey here. The topic is configuration management and version control systems for an industrial product (such as a piece of software, a furniture design, a car engine or even an aeroplane design, etc.).

    1.) What is your field of expertise (IT, engineering, manufacturing, etc.), and what configuration management and version control system do you use, or have you used, for your work?

    2.) What is your opinion of them (good, bad, what they lack, what could be improved, etc.)?

    I would much appreciate it if you could include some real-life examples with your opinions. Of course, you are also welcome to simply share your thoughts on the configuration management and version control systems currently on the market. Thanks in advance for your help.

    Read the article

  • Dispelling the UIImage imageNamed: FUD

    - by Roger Nolan
    I see a lot of people saying imageNamed: is bad, but an equal number saying its performance is good, especially when rendering UITableViews. See this SO question for example, or this article on iPhoneDeveloperTips.com. UIImage's imageNamed: method used to leak, so it was best avoided, but that has been fixed in recent releases. I'd like to understand the caching algorithm better in order to make a reasoned decision about where I can trust the system to cache my images and where I need to go the extra mile and do it myself. My current basic understanding is that it's a simple NSMutableDictionary of UIImages referenced by filename; it grows, and when memory runs out it shrinks a lot. For example, does anyone know for sure that the image cache behind imageNamed: does not respond to didReceiveMemoryWarning? It seems unlikely that Apple would not do this. If you have any insight into the caching algorithm, please post it here.
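    To make the mental model in the question concrete, here is that presumed cache sketched in Python purely as an illustration; it is not Apple's implementation, and the loader is a caller-supplied stand-in:

        class ImageCache:
            # The model described above: a plain dictionary keyed by
            # filename, emptied when a memory warning arrives.
            def __init__(self, loader):
                self._loader = loader   # stand-in for reading the image file
                self._images = {}

            def image_named(self, name):
                if name not in self._images:
                    self._images[name] = self._loader(name)
                return self._images[name]

            def did_receive_memory_warning(self):
                # The open question in the post: does the real cache do this?
                self._images.clear()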

    Read the article

  • Are database triggers evil?

    - by WW
    Are database triggers a bad idea? In my experience they are evil, because they can result in surprising side effects and are difficult to debug (especially when one trigger fires another). Often developers do not even think to look for a trigger. On the other hand, if you have logic that must run every time a new FOO is created in the database, then the most foolproof place to put it seems to be an insert trigger on the FOO table. The only time we use triggers is for really simple things like setting the ModifiedDate.

    Read the article

  • Replacement for PHP's __autoload function?

    - by Josh
    I have read about dynamically loading your class files when they are needed, using a function like this:

        function __autoload($className) {
            include("classes/$className.class.php");
        }
        $obj = new DB();

    This automatically loads DB.class.php when you create a new instance of that class, but I have also read in a few articles that relying on it is bad: __autoload() is a global function, and any library you bring into your project that defines its own __autoload() will clash with it. Does anyone know of a solution, or another way to achieve the same effect as __autoload()? Until I find a suitable one I'll just carry on using __autoload(), as it doesn't become a problem until you bring in libraries and such. Thanks.

    Read the article

  • privmsg system db schema

    - by Bartek
    I'm making a PM system on my site and want to know the best db schema for it. I have always used just one table, but my users have started complaining that the messages in their outbox suddenly disappear. =D That's because there is only one copy of each message: if the recipient deletes it, the sender no longer sees it either. So I'm thinking of adding a second table with the same fields, something like this:

        privmsgs
        id | to | from | subject | message | date
        ---+----+------+---------+---------+----------
        1  | 76 | 893  | blabla. | blabla. | 20100404

        sent_msgs
        id | to | from | subject | message | date
        ---+----+------+---------+---------+----------
        1  | 76 | 893  | blabla. | blabla. | 20100404

    What do you think? Sorry for my bad English.

    Read the article

  • Logging: which is the best way?

    - by Tony
    Hi. People who talk about loggers here never seem to talk about the Windows EventLog; I think it is a good fit for a Windows system. Is it reliable, or will I find it dead some bad morning? Why not log everything to SQL Server? I am building an e-commerce website, so if SQL Server goes down the website will be down anyway, but I am worried about temporary connection failures. What do you think? And why does everyone like files? They can grow very large, too big to handle; maybe I should start a new file when the current one gets too big, or create one file per date. Has anyone tried the MS Enterprise Library? Tell me about it. Thanks
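    As a cross-language illustration of the "new file when it gets too big, or one file per date" idea (not the Enterprise Library or EventLog API), Python's standard logging module ships both rotation policies; a sketch, where in practice you would pick one policy:

        import logging
        from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

        log = logging.getLogger("shop")
        log.setLevel(logging.INFO)

        # Roll over once the current file reaches ~5 MB, keeping the last 10 files.
        log.addHandler(RotatingFileHandler("shop.log", maxBytes=5 * 1024 * 1024, backupCount=10))

        # Or: start a fresh file every midnight, i.e. one file per date.
        log.addHandler(TimedRotatingFileHandler("shop-daily.log", when="midnight", backupCount=30))

        log.info("order placed")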

    Read the article

  • iPhone modal view inside another modal view?

    - by Rick
    My app uses a modal view when a user adds a new foo: the user selects a foo type in this modal view. Depending on which type is selected, the user needs to be asked for more information, and I'd like to use another modal view to ask for it. I've tried to create the new modal view the same way as the first one (which works great), and it leads to a stack overflow ("Loading Stack Frames" error in Xcode). Am I going about this in completely the wrong way, i.e. is this just a really bad idea? Should I rethink the UI itself?

        UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:addController];
        [self presentModalViewController:navigationController animated:YES];

    Read the article

  • How do I design a cryptographic hash function?

    - by Eyal
    After reading the following about why one-way hash functions are one-way, I would like to know how to design a hash function: http://stackoverflow.com/questions/1038307/help-me-better-understand-cryptographic-hash-functions/1047106#1047106 Before everyone gets on my case: yes, I know it's a bad idea not to use a proven and tested hash function; I would still like to know how it's done. I'm familiar with Feistel-network ciphers, but those are necessarily reversible, which is horrible for a cryptographic hash. Is there some sort of construction that is widely used in cryptographic hashing, something that makes it very one-way?
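    For context, the construction that classical hashes such as MD5, SHA-1 and SHA-2 are built on is Merkle-Damgård: pad the message, split it into fixed-size blocks, and fold each block into a chaining value with a one-way compression function (often Davies-Meyer over a block cipher). A toy Python sketch of the shape only, for a bytes message, with a deliberately insecure stand-in compression function:

        import struct

        BLOCK_SIZE = 64  # bytes, as in MD5/SHA-1

        def toy_compress(chaining_value, block):
            # Stand-in only; a real design would use something like
            # Davies-Meyer over a strong block cipher, not this mixing.
            h = chaining_value
            for i in range(0, BLOCK_SIZE, 8):
                (word,) = struct.unpack(">Q", block[i:i + 8])
                h = (h * 0x100000001B3 ^ word) & 0xFFFFFFFFFFFFFFFF
            return h

        def merkle_damgard(message, iv=0xCBF29CE484222325):
            bit_len = len(message) * 8
            message += b"\x80"                                   # padding marker
            message += b"\x00" * ((-len(message) - 8) % BLOCK_SIZE)
            message += struct.pack(">Q", bit_len)                # length strengthening
            h = iv
            for i in range(0, len(message), BLOCK_SIZE):
                h = toy_compress(h, message[i:i + BLOCK_SIZE])
            return h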

    Read the article

  • Where does complexity bloat from?

    - by AareP
    Many of our design decisions are based on a gut feeling about how to avoid complexity and bloat. Some of our complexity fears are justified; we have plenty of painful experience with throwing away deprecated code. Other times we learn that a particular thing isn't really as complex as we thought it would be. We notice, for example, that maintaining 3000 lines of code in one file isn't that difficult... or that using special-purpose "dirty flags" isn't really bad OO practice... or that in some cases it's more convenient to have 50 variables in one class than 5 different classes with shared responsibilities... One friend has even stated that adding functions to a program doesn't really add complexity to the system. So, what do you think: where does bloated complexity creep in from? Is it variable count, function count, line count, lines of code per function, or something else?

    Read the article

  • are there any negative implications of sourcing a javascript file that does not actually exist?

    - by dreftymac
    If you put <script src="/path/to/nonexistent/file.js"></script> in an HTML file and open it in a browser, and nothing else in the HTML file depends on that file or the code in it actually existing, is there anything inherently bad-practice about doing this? Yes, it is an odd question. The rationale: the developer is dealing with a CMS that allows custom (self-contained) javascript files to be provided in certain circumstances. The problem is that the CMS is not very flexible when it comes to creating conditional includes for javascript, so it is easier to just reference the self-contained js files regardless of whether they are actually present at the specified path. Since no errors are displayed to the user, should this practice be considered a viable option?

    Read the article

  • Display PDF in Html

    - by anil
    Hi, I want to show a PDF in a view in MVC. The following action returns the file:

        public ActionResult TakeoffPlans(string projID)
        {
            Highmark.BLL.Models.Project proj = GetProject(projID);
            List ff = proj.GetFiles(Project_Thin.Folders.CompletedTakeoff, false);
            ViewData["HasFile"] = "0";
            if (ff != null && ff.Count > 0 && ff.Where(p => p.FileExtension == "pdf").Count() > 0)
            {
                ViewData["HasFile"] = "1";
            }
            ViewData["ProjectID"] = projID;
            ViewData["Folder"] = Project_Thin.Folders.CompletedTakeoff;
            //return View("UcRenderPDF");
            string fileName = Server.MapPath("~/Content/Project List Update 2.pdf");
            return File(fileName, "application/pdf", Server.HtmlEncode(fileName));
        }

    but it displays some bad data in the view. Please help me with this.

    Read the article

  • Objectdatasource and Gridview : Sorting, paging, filtering

    - by Simon
    Hi there, I'm using Entity Framework 1.0 and trying to feed a GridView with an ObjectDataSource that has access to my facade. The problem is that this seems to be particularly difficult, and I haven't seen anything on the internet that really does what I want. For those who know: a GridView fed by an ObjectDataSource can't sort automatically, so you must do it manually. That's not so bad. Where it becomes a nightmare is when you add paging and filter settings to the GridView's data source. After many hours of searching the internet, I'm asking you, guys, whether anyone knows a link that explains how to combine paging, sorting and filtering for a GridView and an ObjectDataSource. Thanks in advance, and sorry for my English.

    Read the article

  • Linq To Sql: Compiled Queries and Extension Methods

    - by Beni
    Hi community, I'm interested in how Linq2Sql handles a compiled query that returns IQueryable. If I call an extension method on a compiled query, such as GetEntitiesCompiled().Count() or GetEntitiesCompiled().Take(x), what does Linq2Sql do in the background? Does it load the result of GetEntitiesCompiled() into memory (mapped to the entity class, like ToList())? That would be very bad; in that situation I should instead write a separate compiled query like CountEntitiesCompiled. So in which situations does it make sense for a compiled query to return IQueryable, given that the query can no longer be modified before the request is sent to the SQL Server? In my opinion I might just as well return List. Thanks for answers!

    Read the article

  • In LaTeX prefer figures on text-heavy pages.

    - by bjarkef
    Hi, LaTeX seems to have a preference for placing figures together on one page and placing the surrounding text on a separate page. Can I somehow shift that balance a bit? I prefer figures to break up the text, to avoid overly black, text-heavy pages. Example:

        \section{Some section}
        [Half a page of text]
        \begin{figure}
          [...]
          \caption{Figure text 1}
        \end{figure}
        [Half a page of text]
        \begin{figure}
          [...]
          \caption{Figure text 2}
        \end{figure}
        [More text]

    What LaTeX usually does is stack the two half pages of text on a single page and put the figures on the following page. I believe this gives a bad balance and bores the reader. So can I change that somehow? I know about suffixing \begin{figure} with [ht!], but often it does not really make a difference. I would like to configure LaTeX's float-placement algorithm to naturally prefer pages that combine figures and text.

    Read the article

  • Chipmunk warning still present with --release

    - by Kaliber64
    I'm using Python 2.7 on Windows 7 64-bit. I downloaded the source for Chipmunk 6.2.x and compiled Pymunk with --release and -c mingw32. Almost zero problems; lots of path-not-found errors because I'm bad. All the Python prints seem to have disappeared, but I get spammed with EPA iteration warnings. I've seen a couple of discussions but no solutions; possibly one of the Chipmunk betas fixes the floating-point errors behind the warning. I picked the latest stable version, I think. My program is seriously bogged down by all the printing.

        class NullDevice():
            def write(self, s):
                pass

        sys.stdout = NullDevice()

    This does not disable the prints coming from the C code. Any help?
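    Replacing sys.stdout only silences Python-level printing; a C extension like Chipmunk typically writes straight to the process's stdout file descriptor. A sketch of the usual workaround, assuming the warnings do go to stdout, is to redirect the descriptor itself:

        import os
        import sys

        sys.stdout.flush()
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, 1)   # fd 1 is stdout; C-level printf now goes to the null device
        os.close(devnull)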

    Read the article

  • Handling over-long UTF-8 sequences

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle over-long UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply: is this a bad idea? A number of sources (including this RFC) suggest that any over-long UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean UTF-8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
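    For readers unfamiliar with the term: an over-long sequence encodes a code point in more bytes than the shortest form requires; the classic example is '/' (U+002F) encoded as the two bytes C0 AF, which byte-level security filters can miss. A small Python illustration (not the Perl module's code) of checking the shortest-form thresholds:

        def is_overlong(first_byte, code_point):
            # UTF-8 shortest-form minimums: 2-byte sequences must encode
            # code points >= 0x80, 3-byte >= 0x800, 4-byte >= 0x10000.
            if first_byte >= 0xF0:
                return code_point < 0x10000
            if first_byte >= 0xE0:
                return code_point < 0x800
            if first_byte >= 0xC0:
                return code_point < 0x80
            return False

        # b"\xC0\xAF" naively decodes to U+002F ('/'); a strict decoder must
        # reject it, while "fixing" it yields the one-byte shortest form 0x2F.
        print(is_overlong(0xC0, 0x2F))   # True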

    Read the article

  • Strange: Planner takes decision with lower cost, but (very) long query runtime

    - by S38
    Facts: PostgreSQL 8.4.2 on Linux. I make use of table inheritance; each table contains 3 million rows; indexes on the joining columns are set; table statistics (analyze, vacuum analyze) are up to date; the only table used is "node" with various partitioned sub-tables; recursive query (pg >= 8.4). Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        -----------------------------------------------------------------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        -----------------------------------------------------------------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    This is incredibly faster, because indexes were used. Notice that the estimated cost of the second plan is somewhat higher than that of the first. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • Best way to structure AJAX for a Zend Framework application

    - by John Nall
    Sorry, but there's a lot of outdated and just plain bad information about Zend Framework out there, since it has changed so much over the years and is so flexible. I thought of having an AJAX module service layer, with controllers and actions that interact with my model. Easy, but not very extensible, and it would violate DRY: if I change the logistics of some process, I'll have to edit both the AJAX controllers and the normal controllers. So ideally I would load the exact same actions for both JavaScript and non-JavaScript users. I have thought about checking for $_POST['ajax'] and, if it is set, loading a different (JSON) view for the data. I'm wondering how best to do this (a front controller plugin, I imagine?), or whether someone can point me to an UP TO DATE tutorial that describes a really good way of building a larger AJAX application. thx

    Read the article

  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else is contributing to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with master, but I always get this warning when I try to push the branch after rebasing:

        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again.  See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    I then use git push --force and it works, but I feel like this is probably bad practice. I want to keep this branch "in sync" with master quickly and easily. Is there a better way of handling this task?

    Read the article

  • Excluding files from web logs

    - by Ray
    Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, css files, and scripts, which I can easily exclude by un-checking the 'log visits' check box in the folder properties in IIS. I would also like to exclude log entries for certain common requests which are not in their own folders, mostly 'favicon.ico', 'scriptresource.axd', and 'webresource.axd'. These (especially scriptresource.axd) make up almost a third of a typical log file on my site. So, the question is: how do I tell IIS not to log these requests? And is there any reason why this would be a bad idea?

    Read the article

  • A web framework where AJAX was not an after thought

    - by Pirate for Profit
    AJAX is a pain in the ass because it essentially means you have to write two sets of similar code: one for browsers with JavaScript enabled and one for those without. Not only that, but you have to wire up JavaScript events to hook into your models and display the results. And if all that weren't bad enough, you need to send an address change with the request, otherwise the user won't be able to "click back" correctly (if confused, look at what happens to the address bar when you click links in GMail). We're searching for something that was designed from the start with all these concerns in mind. Performance and security are also obvious major concerns. We love config-based systems as well, where you don't have to write a lot of code; you just drop it into an easily read config format. It's like asking for the holy grail, right?

    Read the article

  • Re-factoring a CURL request to Ruby's RestClient

    - by user94154
    I'm having trouble translating this cURL request into Ruby using RestClient:

        system("curl --digest -u #{@user}:#{@pass} '#{@endpoint}/#{id}' --form image_file=@'#{path}' -X PUT")

    I keep getting 400 Bad Request errors. As far as I can tell, the request gets properly authenticated but chokes on the file-upload part. Here are my best attempts, all of which get me 400 errors:

        resource = RestClient::Resource.new "#{@endpoint}/#{id}", @user, @pass

        # attempt 1
        resource.put :image_file => File.new(path, 'rb'), :content_type => 'image/jpg'

        # attempt 2
        resource.put File.read(path), :content_type => 'image/jpg'

        # attempt 3
        resource.put File.open(path) {|f| f.read}, :content_type => 'image/jpg'

    Read the article

  • How do I stall until a SharePoint List Item is Deleted with SPLongOperation?

    - by ccomet
    I have a workflow which creates a task and deletes it once the task has been edited and its useful information acquired. I created a custom edit form for the task, so I have an SPLongOperation that I can use to stall the page. This is necessary because, without stalling the page in some fashion, the person will see the task in the task list for the brief moment before the workflow deletes it, and that is bad. So I need some code that stalls the page until the task is fully deleted. I have currently implemented a solution for this, but I am unsatisfied with the approach: it basically boils down to a while loop that calls SPList.GetItemById until it throws an exception. Deliberately provoking an exception doesn't sit well with me, but I cannot think of a faster way of checking this. I'm looking for alternatives, preferably ones that are as fast or faster and, ideally, don't rely on catching exceptions. Thank you in advance!
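    As a language-agnostic sketch of the pattern being asked for (illustrative Python, not SharePoint API code, and the names below are hypothetical), the loop can test for absence with a caller-supplied existence check and a timeout instead of provoking exceptions:

        import time

        def wait_until_deleted(item_exists, item_id, timeout=30.0, interval=0.25):
            # item_exists is a predicate, e.g. one that runs a query filtered
            # on the item ID and reports whether any row came back.
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                if not item_exists(item_id):
                    return True        # the item is gone; safe to continue
                time.sleep(interval)
            return False               # give up rather than stall forever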

    Read the article

  • Including inline javascript using content_for in rails

    - by TenJack
    I am using content_for and yield to inject javascript files into the bottom of my layout, but I am wondering what the best practice is for including inline javascript. Specifically, I'm wondering where to put the script type declaration:

        <% content_for :javascript do %>
          <script type="text/javascript">
            ...
          </script>
        <% end %>

    or

        <% content_for :javascript do %>
          ...
        <% end %>

        <script type="text/javascript">
          <%= yield :javascript %>
        </script>

    I am using the first option now and wondering if it is bad to include multiple <script>...</script> declarations within one view. Sometimes I have partials that lead to this.

    Read the article
