Search Results

Search found 19908 results on 797 pages for 'bit ly'.


  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result-set:

        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id
            -> WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        |  1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |                                              |
        |  1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |                                              |
        |  1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |                                              |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?
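    One thing worth ruling out (a sketch, not a diagnosis - table and column names are taken from the query above): the 446 rows probed in `line` are fetched by random lookups through `idx_vend_id`, and a covering index that also carries `slip_id` and `line_subtotal` would let MySQL answer from the index alone. Re-run EXPLAIN afterwards and look for "Using index" under Extra:

        -- Hypothetical covering index; verify column names against the actual schema
        ALTER TABLE line ADD INDEX idx_vend_slip_sub (vend_id, slip_id, line_subtotal);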


  • How to get the real, actual duration of an MP3 file (VBR or CBR) server-side

    - by Cummander Checkov
    I used to calculate the duration of MP3 files server-side using ffmpeg, which seemed to work fine. Today I discovered that some of the calculations were wrong. Somehow, for some reason, ffmpeg will miscalculate the duration, and it seems to happen with variable bit rate MP3 files only. When testing this locally, I noticed that ffmpeg printed two extra lines in green. Command used:

        ffmpeg -i song_9747c077aef8.mp3

    ffmpeg says:

        [mp3 @ 0x102052600] max_analyze_duration 5000000 reached at 5015510
        [mp3 @ 0x102052600] Estimating duration from bitrate, this may be inaccurate

    After a nice, warm Google session, I found some posts on this, but no solution. I then tried to increase the maximum analyze duration:

        ffmpeg -analyzeduration 999999999 -i song_9747c077aef8.mp3

    After this, ffmpeg returned only the second line:

        [mp3 @ 0x102052600] Estimating duration from bitrate, this may be inaccurate

    But in either case the calculated duration was just plain wrong. Comparing it to VLC, I noticed that there the duration is correct. After more research I stumbled over mp3info, which I installed and used:

        mp3info -p "%S" song_9747c077aef8.mp3

    mp3info then returned the CORRECT duration, but only as an integer, which I cannot use as I need a more accurate number. The reason for this was explained in a comment below by user blahdiblah - mp3info is simply pulling ID3 info from the file and not actually performing any calculations. I also tried using mplayer to retrieve the duration, but just like ffmpeg, mplayer returns the wrong value. Now I've run out of options. If somebody knows how to get around this, any hints, tips, guides or corrections are welcome! Thank you!
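    A workaround worth trying (a sketch - it trades speed for accuracy): make ffmpeg decode the whole file to a null output, which forces it to measure the real duration instead of estimating it from the bitrate. The final progress line on stderr carries the true time:

        # Decode fully; the last "time=" value on stderr is the actual duration
        ffmpeg -i song_9747c077aef8.mp3 -f null - 2>&1 | grep time=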


  • Newbie - eclipse workflow (PHP development)

    - by engil
    Hi all - this is a bit of a newbie question, but I'm hoping I can get some guidance. I've been playing around with Eclipse for a couple of months, yet I'm still not completely comfortable with my setup, and it seems like every time I install it on a new system I end up with different results. What I'm hoping to achieve is (I think) fairly standard. In my environment I'd like SVN (currently using Subclipse), FTP support (currently using the Aptana plugin), debugging (going to use XDebug) and all the usual bells and whistles of development (code completion, refactoring, etc.). My biggest current issue is how to set up my environment to support both a 'development' and a 'production' server. Optimally I would work directly against the dev server (Eclipse on my Vista desktop against the VM Ubuntu dev server) and then push to the production server (shared hosting). I'd prefer to work directly against the dev server (with no local project files, just using the Connections provided by Aptana), but I'm guessing this won't allow for code completion or all the other bells and whistles provided for development. Any thoughts? Kind of an open-ended question, but maybe this could be an opportunity for some of you with a great deal of Eclipse experience to describe your setups, so people like me can get some insight into good ways to get set up.


  • codeigniter cron job with http access

    - by user1313850
    Sorry if this is a duplicate question... I've searched around and found similar advice, but nothing that helps my exact problem. And please excuse the noob questions; cron is a new thing for me. I have a CodeIgniter script that scrapes the HTML DOM of another site and stores some of that in a database. I'd like to run this script at a regular interval, which has led me to looking into cron jobs. The page I have is at myserver.com/index.php/update. I realize I can run a cron job with curl and request this page. If I want to be a bit more secure I can put a string at the end, like:

        myserver.com/index.php/update/asdfh2784fufds

    and check for that in my CI controller. This seems like it would be mostly secure, but doesn't seem like the "right" way to do things. I've looked into running CI from the command line, and can execute basic pages like:

        php index.php mycontroller

    But when I try to do:

        php index.php update

    it doesn't work. I suspect this is because it needs to use HTTP to scrape the DOM of the outside page. So, my question: how do I securely run a CodeIgniter script with a cron job that needs HTTP access?
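    For what it's worth, a sketch of the token approach hardened a little (names are invented; CodeIgniter 1.x-style controller - extend CI_Controller in 2.x). Outbound HTTP scraping with cURL or file_get_contents works the same whether the controller is triggered by cron's curl or by a browser, so the main job is keeping strangers from triggering it:

        <?php
        // application/controllers/update.php — hypothetical controller
        class Update extends Controller {
            public function run($token = '') {
                // Token lives in a config file rather than hard-coded here
                if ($token !== $this->config->item('cron_token')) {
                    show_404();   // strangers get a 404, not a scrape run
                    return;
                }
                // ... scraping + DB writes as before ...
            }
        }

    and the matching crontab entry (interval illustrative):

        */15 * * * * curl -s "http://myserver.com/index.php/update/run/asdfh2784fufds" > /dev/null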


  • CakePHP dropping session between pages

    - by DavidYell
    Hi, I have an application with multiple regions and various incoming links. The premise (well, it worked before) is that in the app_controller I break out these incoming links and set them in the session. So I have a huge beforeFilter() in my app_controller which catches these and sets two variables in the session, Viewing.region and Search.engine - no problem. The problem is that the session does not seem to be persistent across page requests. For example, going to /reviews/write (userReviews/add) should have a session available which was set when the user arrived at the site, although it seems to have vanished! It would appear that unless $this->params is caught explicitly in the app_controller and a session variable written, it does not exist on other pages. So far I have tried swapping between storing the session in 'cake' and 'php', both of which exhibit the same behaviour; I use 'php' as a default. My Session.timeout is '120', Session.checkAgent is false and Security.level is 'low', all of which should give the framework enough leniency to let sessions live as long as possible! I'm a bit stumped as to why the session seems to be either recreated or blanked when a new page is requested. I have also commented out the requestAction() calls to make sure they aren't confusing the session request object, which doesn't make a difference. Any help would be great, as I don't want to have to recode the site to pass all the various variables via parameters in the URL - that would suck, and it's worked before, thus switching on $this->Session->read('Viewing.region') in all my code!
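    For reference, a sketch of the settings described above as they would appear in app/config/core.php (CakePHP 1.x syntax; values copied from the question):

        // app/config/core.php (CakePHP 1.x)
        Configure::write('Session.save', 'php');      // 'cake' was also tried
        Configure::write('Session.timeout', '120');
        Configure::write('Session.checkAgent', false);
        Configure::write('Security.level', 'low');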


  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: I have a largish number of UUIDs (currently about 10k, but expected to grow unboundedly - they're user IDs) and a function f : id -> sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive (not outrageously so, but probably on the order of a few hundred ms for a given id). The dimension of the sparse vectors should be assumed to be infinite, as new dimensions can appear over time, but in practice it is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values). I want to support the following operations efficiently:

      - add a new ID to the collection
      - invalidate an existing ID
      - retrieve the sum of f(id) over all valid ids, in O(changes since last retrieval)

    i.e. I want to cache the sum of the vectors in a way that's reasonable to do incrementally (see the sketch after this list). One option would be to support a remove-ID operation and treat invalidation as a remove followed by an add. The problem is that this requires us to keep track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that. The likely usage pattern is that new IDs are added at a fairly continuous rate and are frequently invalidated at first. IDs which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old ID can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way that lets me save the result to disk efficiently), so an idea which lets me piggyback off an existing DB implementation of some sort would be especially appreciated.
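    To make the remove-as-subtract idea concrete, here is a minimal sketch (C#; a plain dictionary stands in for whatever keyed store would hold each id's last-known vector - on disk it could equally be a DB table keyed by id). It keeps the running sum current in O(size of the changed vectors) per invalidation, at exactly the space cost the question identifies:

        using System;
        using System.Collections.Generic;
        using SparseVec = System.Collections.Generic.Dictionary<int, int>;

        class CachedVectorSum
        {
            // Stand-in for the persistent store of each id's last-known f(id)
            private readonly Dictionary<Guid, SparseVec> _last = new Dictionary<Guid, SparseVec>();
            private readonly SparseVec _sum = new SparseVec();

            // Covers both "add a new ID" and "invalidate an existing ID"
            public void Put(Guid id, SparseVec fresh)
            {
                if (_last.TryGetValue(id, out SparseVec stale))
                    Accumulate(stale, -1);   // subtract the stale contribution
                Accumulate(fresh, +1);       // add the recomputed one
                _last[id] = fresh;
            }

            public IReadOnlyDictionary<int, int> Sum => _sum;

            private void Accumulate(SparseVec v, int sign)
            {
                foreach (var kv in v)
                {
                    _sum.TryGetValue(kv.Key, out int cur);
                    int next = cur + sign * kv.Value;
                    if (next == 0) _sum.Remove(kv.Key);
                    else _sum[kv.Key] = next;
                }
            }
        }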


  • Replacing characters in a non well-formed XML body

    - by ryanprayogo
    In some (Java) code that I'm working on, I sometimes deal with XML that is not well-formed (represented as a Java String), such as:

        <root>
          <foo>
            bar & baz < quux
          </foo>
        </root>

    Since this XML will eventually need to be unmarshalled (using JAXB), it will obviously throw an exception upon unmarshalling as-is. What's the best way to replace the & and the < with their character entities? For &, it's as easy as:

        xml.replaceAll("&", "&amp;")

    However, for the < symbol it's a bit tricky, since obviously I don't want to replace the < that's used as the XML tag opening 'bracket'. Other than scanning the string and manually replacing < in the XML body with &lt;, what other options can you suggest?
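    One heuristic worth considering (a sketch in Java - it assumes text like the sample and will not handle CDATA sections or comments): escape only the & that does not already start an entity, and only the < that is not followed by something that could open or close a tag:

        String fixed = xml
            // '&' not already beginning a character/entity reference
            .replaceAll("&(?!(amp|lt|gt|apos|quot|#\\d+|#x[0-9a-fA-F]+);)", "&amp;")
            // '<' not followed by a name start, '/', '!' or '?' is treated as text
            .replaceAll("<(?![a-zA-Z_/!?])", "&lt;");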


  • Why can't we just use a hash of passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, now I have a very interesting idea: do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will a simple hash of that string work equally well as a key/IV when encrypting the data with a symmetric algorithm (e.g. AES, DES, etc.)? I see tons of AES encryption code snippets where the Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from a password string. It is assumed that one should use a random salt and a large number of iterations to derive a key/IV secure enough for the encryption. While deriving bytes from a password string using this method is quite useful in some scenarios, I think it's not applicable when encrypting data with symmetric algorithms! Here is why: using a salt makes sense when there is a possibility of building precalculated rainbow tables, so that an attacker who gets his hands on the hash can look up the original password. But... with symmetric data encryption, I think this is not required, as the hash of the password string, or the encryption key, is never stored anywhere. So, if we just take the SHA1 hash of the password and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using the Rfc2898DeriveBytes class to generate a key/IV from a password string (a very performance-intensive operation) when we could just use a SHA1 (or any other) hash of that password? A hash would give a random bit distribution in the key (as opposed to using the string bytes directly), and an attacker would have to brute-force the whole key range (e.g. for a 256-bit key he would have to try 2^256 combinations) anyway. So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here at SO) that use the Rfc2898DeriveBytes method to generate the encryption key and IV are just wrong.
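    A note on the flaw in that reasoning, with a sketch for contrast (C#; the iteration count is illustrative): an attacker does not brute-force the 2^256 key space, but the far smaller passphrase space, recomputing the derivation for each guess. The extra defence a KDF buys is making each guess expensive, which a single cheap SHA pass does not:

        using System.Security.Cryptography;
        using System.Text;

        // One cheap operation per password guess — fast for the attacker too
        static byte[] KeyFromHash(string password)
        {
            using (var sha = SHA256.Create())
                return sha.ComputeHash(Encoding.UTF8.GetBytes(password));
        }

        // ~10,000 iterations per guess (illustrative count) — slows dictionary attacks
        static byte[] KeyFromKdf(string password, byte[] salt)
        {
            using (var kdf = new Rfc2898DeriveBytes(password, salt, 10000))
                return kdf.GetBytes(32); // 256-bit AES key
        }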


  • Asynchronous Controller is blocking requests in ASP.NET MVC through jQuery

    - by Jason
    I have just started using the AsyncController in my project to take care of some long-running reports. It seemed ideal at the time, since I could kick off the report and then perform a few other actions while waiting for it to come back and populate elements on the screen. My controller looks a bit like this; I tried to use a thread to perform the long task, which I'd hoped would free up the controller to take more requests:

        public class ReportsController : AsyncController
        {
            public void LongRunningActionAsync()
            {
                AsyncManager.OutstandingOperations.Increment();
                var newThread = new Thread(LongTask);
                newThread.Start();
            }

            private void LongTask()
            {
                // Do something that takes a really long time
                //.......
                AsyncManager.OutstandingOperations.Decrement();
            }

            public ActionResult LongRunningActionCompleted(string message)
            {
                // Set some data up on the view or something...
                return View();
            }

            public JsonResult AnotherControllerAction()
            {
                // Do a quick task...
                return Json("...");
            }
        }

    But what I am finding is that when I call LongRunningAction using the jQuery ajax request, any further requests I make after that back up behind it and are not processed until LongRunningAction completes. For example, call LongRunningAction, which takes 10 seconds, then call AnotherControllerAction, which takes less than a second: AnotherControllerAction simply waits until LongRunningAction completes before returning a result. I've also checked the jQuery code, and this still happens if I specifically set "async: true":

        $.ajax({
            async: true,
            type: "POST",
            url: "/Reports.aspx/LongRunningAction",
            dataType: "html",
            success: function(data, textStatus, XMLHttpRequest) {
                // ...
            },
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                // ...
            }
        });

    At the moment I just have to assume that I'm using it incorrectly, but I'm hoping one of you guys can clear my mental block!
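    One frequent cause of exactly this serialization, offered as a guess rather than a diagnosis: ASP.NET locks session state per user for the duration of each request, so two requests from the same browser session run one after the other regardless of async controllers. If the report actions don't write to the session, MVC 3 and later can declare that (sketch):

        // Sketch (ASP.NET MVC 3+): allow concurrent requests from one session
        // by taking only a read lock on session state for this controller.
        [SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]
        public class ReportsController : AsyncController
        {
            // ... actions as above ...
        }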


  • calling a function from a set of overloads depending on the dynamic type of an object

    - by Jasper
    I feel like the answer to this question is really simple, but I am having trouble finding it. So here goes: suppose you have the following classes:

        class Base;
        class Child : public Base;

        class Displayer
        {
        public:
            Displayer(Base* element);
            Displayer(Child* element);
        };

    Additionally, I have a Base* object which might point to either an instance of the class Base or an instance of the class Child. Now I want to create a Displayer based on the element pointed to by object; however, I want to pick the right version of the constructor. As I currently have it, this would accomplish just that (I am being a bit fuzzy with my C++ here, but I think this is the clearest way):

        object->createDisplayer();

        virtual void Base::createDisplayer()  { new Displayer(this); }
        virtual void Child::createDisplayer() { new Displayer(this); }

    This works; however, there is a problem: Base and Child are part of the application system, while Displayer is part of the GUI system. I want to build the GUI system independently of the application system, so that it is easy to replace the GUI. This means that Base and Child should not know about Displayer. However, I do not know how I can achieve this without letting the application classes know about the GUI. Am I missing something very obvious, or am I trying something that is not possible?
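    The classic decoupling for this is double dispatch through a visitor interface that lives with the application classes, while its implementations live in the GUI. A sketch under that assumption (names invented; C++11 for override):

        // Application side: knows only the abstract visitor, not the GUI.
        class Base;
        class Child;

        class ElementVisitor {
        public:
            virtual ~ElementVisitor() {}
            virtual void visit(Base& b) = 0;
            virtual void visit(Child& c) = 0;
        };

        class Base {
        public:
            virtual ~Base() {}
            virtual void accept(ElementVisitor& v) { v.visit(*this); }
        };

        class Child : public Base {
        public:
            void accept(ElementVisitor& v) override { v.visit(*this); }
        };

        // GUI side: picks the right Displayer constructor per dynamic type.
        class DisplayerMaker : public ElementVisitor {
        public:
            void visit(Base& b) override  { /* new Displayer(&b) */ }
            void visit(Child& c) override { /* new Displayer(&c) */ }
        };

        // usage: object->accept(maker);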


  • Fatal error: Call to a member function escape() on a non-object in .....on line 10

    - by danyo
    I am making a simple JavaScript login form for WordPress. I have the form submitting to the following bit of PHP to handle the login:

        <?php
        get_header();
        global $user_ID;
        if (!$user_ID) {
            if ($_POST) {
                // We shall SQL-escape all inputs
                $username = $wpdb->escape($_REQUEST['username']);
                $password = $wpdb->escape($_REQUEST['password']);
                $remember = $wpdb->escape($_REQUEST['rememberme']);
                if ($remember) $remember = "true";
                else $remember = "false";
                $login_data = array();
                $login_data['user_login'] = $username;
                $login_data['user_password'] = $password;
                $login_data['remember'] = $remember;
                // wp_signon is a WordPress function which authenticates a user.
                // It accepts user info parameters as an array.
                $user_verify = wp_signon($login_data, false);
                if (is_wp_error($user_verify)) {
                    echo "<span class='error'>Invalid username or password. Please try again!</span>";
                    exit();
                } else {
                    echo "<script type='text/javascript'>window.location='" . get_bloginfo('url') . "'</script>";
                    exit();
                }
            } else {
                //get_header();
        ?>

    Any ideas on why I am getting the error? Cheers, Dan
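    A likely culprit, offered as a guess: inside a template file $wpdb is not automatically in scope, so $wpdb->escape() is being called on null. Declaring it global usually clears this exact fatal error (sketch):

        <?php
        get_header();
        global $user_ID, $wpdb;   // $wpdb must be pulled into scope explicitly
        // ... rest of the login handling as above ...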


  • Database/Object Mapping

    - by Eric
    Hello everyone. This is a beginner question, but it's been frustrating me... I am using C#, by the way. I'd like to make a few classes, each with their own properties and methods. I would also like a database to store certain instances of these classes, in case I ever need to look at them again. So, for example...

        class Polygon
        {
            String name;
            Double perimiter;
            int numSides;

            public Double GetArea()
            {
                // ...
            }
        }

        class Circle
        {
            String name;
            Double radius;

            public void PrintName()
            {
                // ...
            }
        }

    Say I've got these classes. I also want a database that has the TABLES "Polygon" and "Circle" with the COLUMNS "name", "perimeter", "radius", etc. And I want an easy way to save a class instance into the database, or pull a class instance out of it. I have previously been using MS Access for my database work, which I don't mind, but I would prefer that nothing other than .NET need be installed. I've been researching online a bit, but I wanted to get some opinions here. I have looked at LINQ to SQL, but it seems you need SQL Server. Is this true? If so, I'd really rather not use it, because I don't want to have it installed everywhere. Anyway, I'm just fishing for ideas/insights/suggestions, so please help me out if you can. Thanks.
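    One route that avoids a separate database install (a sketch, assuming the System.Data.SQLite ADO.NET provider ships with the app - the file-based database then travels with the program):

        using System.Data.SQLite;

        // Sketch: persist a Circle into a file-based SQLite database.
        // Table/column names mirror the classes above; 'circle' is an instance.
        using (var conn = new SQLiteConnection("Data Source=shapes.db"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "INSERT INTO Circle (name, radius) VALUES (@name, @radius)";
                cmd.Parameters.AddWithValue("@name", circle.name);
                cmd.Parameters.AddWithValue("@radius", circle.radius);
                cmd.ExecuteNonQuery();
            }
        }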


  • Why is Apache + Rails is spitting out two status headers for code 500?

    - by Daniel Beardsley
    I have a Rails app that is working fine, except for one thing. When I request something that doesn't exist (i.e. /not_a_controller_or_file.txt) and Rails throws a "No route matches..." exception, the response is this (blank line intentional):

        HTTP/1.1 200 OK
        Date: Thu, 02 Oct 2008 10:28:02 GMT
        Content-Type: text/html
        Content-Length: 122
        Vary: Accept-Encoding
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive

        Status: 500 Internal Server Error
        Content-Type: text/html

        <html><body><h1>500 Internal Server Error</h1></body></html>

    I have the ExceptionLogger plugin in /vendor, though that doesn't seem to be the problem. I haven't added any error handling beyond the custom 500.html in public (and the response doesn't contain that HTML), and I have no idea where this bit of HTML is coming from. So something, somewhere is adding that HTTP/1.1 200 status code too early, or the Status: 500 too late. I suspect it's Apache, because I get the appropriate HTTP/1.1 500 header (at the top) when I use WEBrick. My production stack is: Apache 2, Mongrel (5 instances), Ruby on Rails 2.1.1 (this happens in both 1.2 and 2.1.1). I forgot to mention: the error is caused by a "No route matches..." exception.
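    A quick way to confirm where the mangling happens (commands illustrative; the Mongrel port is assumed): fetch the bad URL from a Mongrel instance directly, then through Apache, and compare the status line of each response:

        # Straight from one of the Mongrel instances (port assumed to be 8000)
        curl -I http://localhost:8000/not_a_controller_or_file.txt
        # Through the Apache front end
        curl -I http://www.example.com/not_a_controller_or_file.txt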


  • Why is there so much XML in Java these days?

    - by BD at Rivenhill
    This is really more of a philosophy/design issue. I did some work in Java back in the middle 90's and again in the early 2000's and now I'm coming back to it after spending a lot of time in C/C++ and it seems like there was an explosion of XML dependency while I was gone. Major build system tools like ant and maven depend on XML for their configuration, but I'm actually more concerned with all the frameworks, such as Spring, Hibernate, etc. My experience is that powerful supporting libraries like these are where a developer can really get leverage for building programs with lots of features without writing a lot of code, but it really seems like I'm getting one language for the price of two here. I write a bunch of Java classes, but then I also write a bunch of XML files to glue them together. The things that get done in the XML are things that I can see reasonable ways of doing in straight code without the middleman, and they don't really seem to be treated exactly like configuration files: they change rarely and they end up getting committed to source code control like the Java code itself, but they are distributed with the resulting application and need to be unpacked and installed in the classpath in order to get the application to work. I'm working with server applications that are not web-based, so maybe the domain is a bit different from what most people are doing, but I just can't help feeling that I must be doing something wrong here. Can someone point me to a good source of information for why these design choices were made and what problems they are meant to solve so that I can analyze my own experiences in this context?


  • jQuery UI Dialog Issue With IE

    - by Dan Appleyard
    I am using the new jQuery 1.3.2 and jQuery UI 1.7 libraries along with the UI Dialog. I have a div tag with several form elements (textbox, checkbox, etc.) in it. Upon page load, jQuery shows the div as a dialog. This works absolutely fine in FF, but in IE the height of the div is wrong: it shows just the title bar and a bit of the content, even though I explicitly set the height when creating the dialog. If I set the height option after opening the dialog, the height is corrected, but the content is blank (it shows the top third of a textbox). If I allow the dialog to be resizable, resizing it in IE makes it display fine, but I don't want to force IE users to resize just to see the contents. Any ideas? Here is the code I use to create the dialog:

        $('#dialogDiv').dialog({
            bgiframe: true,
            height: 400,
            width: 620,
            modal: true,
            draggable: true,
            resizable: false,
            close: function(event, ui) {
                if ($('#agree').val() != '1')
                    location.href = 'somepage.html';
            }
        });
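    One workaround that sometimes helps with IE height glitches in this generation of UI Dialog (a guess, not a confirmed fix - the 340px value is illustrative, roughly the dialog height minus the title bar): size the content pane yourself once the dialog has opened, which mimics what a manual resize does:

        $('#dialogDiv').dialog({
            bgiframe: true,
            height: 400,
            width: 620,
            modal: true,
            open: function(event, ui) {
                // Explicitly size the content area after IE has rendered the chrome
                $(this).height(340);
            }
        });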


  • What do you call a generalized (non-GUI-related) "Model-View-Controller" architecture?

    - by dcuccia
    I am currently refactoring code that coordinates multiple hardware components for data acquisition, and I'm feeling a bit like I'm recreating the wheel. In particular, an MVC-like pattern seems to be emerging - except this has nothing to do with a GUI, and I'm worried that I'm forcing this particular pattern where another might be more appropriate. Here's my scenario: individual hardware "component" classes obey interface contracts for each hardware type. Previously, component instances were orchestrated by a single monolithic InstrumentController class, which relied heavily on configuration plus branching logic to execute a specific acquisition sequence. After an iteration, I have a separate controller for each component, with these controllers all managed by a small InstrumentControllerBase (or its derivatives). The composite system receives "input" either programmatically or via inter-hardware component triggering - in either case these interactions are routed to, and handled by, the appropriate controller. So, I have something that feels MVC-esque, but I don't know if that's because I'm forcing the point. With little direct MVC experience in application development, it's hard to know whether I'm just trying to make my scenario fit MVC, where another pattern might be a good alternative or complement. My problem is that search results and wiki documentation for this family of patterns immediately drop me into GUI-specific discussions. I understand "M means Model data and V means View" - but what do you call the superset pattern? Component-Commander-Controller? Whence can I exhume exemplary examples?


  • Should I put actors in the Domain-Model/Class-Diagram?

    - by devoured elysium
    When designing both the domain model and class diagrams, I am having some trouble understanding what to put in them. I'll give an example of what I mean: I am writing a vacation-scheduler program that has an Administrator and End-Users. The Administrator does a couple of things like registering End-Users in the program, changing their privileges, etc. The End-User can choose his vacation days, etc. I initially defined Administrator and End-User as concepts in the domain model, and later as classes in the class diagram. In the class diagram, both classes ended up having a couple of methods like Administrator.RegisterNewUser() and Administrator.UnregisterUser(int id). Only after some time did I realise that both Administrator and End-User are actually actors, and maybe I got this design totally wrong. Instead of filling the Administrator and End-User classes with methods to do what my use cases request, I could define other domain classes to do them, and have controllers handle the use cases (actually, I decided to do one for each use case). I could have UserDatabase.RegisterNewUser() and UserDatabase.UnregisterUser(int id), for example, instead of having those methods on the Administrator class. The idea would be to think of the whole vacation scheduler as a "closed program" that has a set of features and doesn't bother with things such as authentication, which should be internal/protected; the only public things I'd let the outside world see would be its controllers. Is this the right approach? Or am I getting this totally wrong? Is it generally a bad idea to put actors in the domain model/class diagrams? What are good rules of thumb for this? My lecturer is following Applying UML and Patterns, which I find awful, so I'd like to know where I could look up more info on this actor-modelling situation. I'm still a bit confused about all of this, as this new approach is radically different from anything I've done before.


  • "2d Search" in Solr or how to get the best item of the multivalued field 'items'?

    - by Karussell
    The title is a bit awkward, but I couldn't find a better one. My problem is as follows: I have several users stored as documents, and I am storing several key-value pairs or items (which have an id) for each document. Now, if I apply highlighting, I can get the first n items. If you have several hundred such items, this highlighting is necessary and works nicely. But there are two problems:

      1. The highlighted text won't contain the id, so retrieving additional information about the highlighted item text is ugly. E.g. you need to store the id in the text so that the highlighter returns it. Adding the id to the hl.fl parameter does not help.
      2. You will not get the most relevant n items; you will get the first n items...

    So how can I find the best items of a document with multiple such items? I will now add my own findings as answers, but as I will point out, each of them has its drawbacks. Hopefully one of you can point me to a better solution.


  • Spreadsheet_Excel_Writer large data output is damaged

    - by dr3w
    I use Spreadsheet_Excel_Writer to generate .xls files, and it works fine until I have to deal with a large amount of data. At a certain point it just writes some nonsense characters and quits filling certain columns, although some columns are filled right to the end (generally numeric data). I'm not quite sure how the xls document is formed: row by row, or column by column... It is also obviously not an error in a string, because when I cut out some data, the error appears a little bit further on. I think there is no need for all of my code; here are the essentials:

        $filename = 'file.xls';
        $workbook = & new Spreadsheet_Excel_Writer();
        $workbook->setVersion(8);
        $contents =& $workbook->addWorksheet('Logistics');
        $contents->setInputEncoding('UTF-8');
        $workbook->send($filename);

        // here is the part where I write data down
        $contents->write(0, 0, 'Field A');
        $contents->write(0, 1, 'Field B');
        $contents->write(0, 2, 'Field C');
        $ROW = 1;
        foreach ($ordersArr as $key => $val) {
            $contents->write($ROW, 0, $val['a']);
            $contents->write($ROW, 1, $val['b']);
            $contents->write($ROW, 2, $val['c']);
            $ROW++;
        }
        $workbook->close();
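    One variable worth eliminating (a guess: streaming straight to the client with send() can mask output-buffering problems that look like corruption): write the workbook to a file on disk first and deliver it afterwards. The PEAR constructor accepts a path for exactly this:

        // Write to disk first, then deliver the finished file
        $workbook = new Spreadsheet_Excel_Writer('/tmp/file.xls');
        $workbook->setVersion(8);
        // ... same worksheet code as above ...
        $workbook->close();
        // then readfile('/tmp/file.xls') behind the appropriate download headers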


  • DCI: How to implement Context with Dependency Injection?

    - by ciscoheat
    Most examples of a DCI Context are implemented as a Command pattern. When using Dependency Injection, though, it's useful to have the dependencies injected in the constructor and send the parameters into the executing method. Compare the Command-pattern class:

        public class SomeContext
        {
            private readonly SomeRole _someRole;
            private readonly IRepository<User> _userRepository;

            // Everything goes into the constructor for a true encapsulated command.
            public SomeContext(SomeRole someRole, IRepository<User> userRepository)
            {
                _someRole = someRole;
                _userRepository = userRepository;
            }

            public void Execute()
            {
                _someRole.DoStuff(_userRepository);
            }
        }

    with the dependency-injected class:

        public class SomeContext
        {
            private readonly IRepository<User> _userRepository;

            // Only what can be injected using the DI provider.
            public SomeContext(IRepository<User> userRepository)
            {
                _userRepository = userRepository;
            }

            // Parameters come in through the executing method.
            public void Execute(SomeRole someRole)
            {
                someRole.DoStuff(_userRepository);
            }
        }

    The last one seems a bit nicer, but I've never seen it implemented like this, so I'm curious whether there are any things to consider.
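    For what it's worth, a usage sketch of the second shape (the container call is hypothetical - any DI framework's resolve method would do): long-lived dependencies are wired once, while the role, which only exists per interaction, arrives at call time:

        // Resolve the context once; the repository is injected by the container.
        var context = container.Resolve<SomeContext>();   // hypothetical container API

        // Each interaction supplies its own role object at execution time.
        context.Execute(someRole);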


  • Using before_create in Rails to normalize a many to many table

    - by weotch
    I am working on a pretty standard tagging implementation for a table of recipes. There is a many-to-many relationship between recipes and tags, so the tags table will be normalized. Here are my models:

        class Recipe < ActiveRecord::Base
          has_many :tag_joins, :as => :parent
          has_many :tags, :through => :tag_joins
        end

        class TagJoin < ActiveRecord::Base
          belongs_to :parent, :polymorphic => true
          belongs_to :tag, :counter_cache => :usage_count
        end

        class Tag < ActiveRecord::Base
          has_many :tag_joins, :as => :parent
          has_many :recipes, :through => :tag_joins, :source => :parent, :source_type => 'Recipe'

          before_create :normalizeTable

          def normalizeTable
            t = Tag.find_by_name(self.name)
            if (t)
              j = TagJoin.new
              j.parent_type = self.tag_joins.parent_type
              j.parent_id = self.tag_joins.parent_id
              j.tag_id = t.id
              return false
            end
          end
        end

    The last bit, the before_create callback, is what I'm trying to get working. My goal is that if there is an attempt to create a new tag with the same name as one already in the table, only a single row is produced in the join table, using the existing row in tags. Currently the code dies with:

        undefined method `parent_type' for #<Class:0x102f5ce38>

    Any suggestions?
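    A common way to sidestep the callback entirely (a sketch in Rails 2.x idiom - the error above comes from calling parent_type on the tag_joins collection rather than on a single join): normalize at assignment time with a dynamic finder, so a duplicate tag is never built in the first place:

        # Wherever tags are attached to a recipe:
        recipe.tags << Tag.find_or_create_by_name(tag_name)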


  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s, and the file download rate at least 2000/s, with an average file size of 15kB. The network bandwidth is 1 Gbps. My approach so far has been: create 600 socket threads, each with 60 sockets, and use WSAEventSelect to wait for data to read. As soon as a file download is complete, add the memory address of the downloaded file to a pipeline (a simple vector) and fire another request. When the total downloaded across all socket threads exceeds 50MB, write all the downloaded files to disk and free the memory. So far this approach has not been very successful: the rate I can hit doesn't shoot beyond 2900 connections/s, and the downloaded data rate is even less. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8GB of memory. Do we need to hack the kernel so we can use more threads and memory? Currently I can create a maximum of 1500 threads, and memory usage doesn't go beyond 2GB (which technically should be much more, as this is a 64-bit machine). And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks guys!
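    Two settings commonly checked when a Windows box stalls near a few thousand outbound connections per second (sketched below; values are illustrative and a reboot is required): ephemeral-port exhaustion and TIME_WAIT buildup both cap the connect rate long before 1 Gbps does. Separately, a 2GB memory ceiling on a 64-bit machine suggests the process itself may be compiled 32-bit, which is worth verifying.

        :: Raise the ephemeral port ceiling and shorten TIME_WAIT (illustrative values)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f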


  • Why does mmap() fail with ENOMEM on a 1TB sparse file?

    - by metadaddy
    I've been working with large sparse files on openSUSE 11.2 x86_64. When I try to mmap() a 1TB sparse file, it fails with ENOMEM. I would have thought that the 64 bit address space would be adequate to map in a terabyte, but it seems not. Experimenting further, a 1GB file works fine, but a 2GB file (and anything bigger) fails. I'm guessing there might be a setting somewhere to tweak, but an extensive search turns up nothing. Here's some sample code that shows the problem - any clues?

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            char * filename = argv[1];
            int fd;
            off_t size = 1UL << 40; // 30 == 1GB, 40 == 1TB

            fd = open(filename, O_RDWR | O_CREAT | O_TRUNC, 0666);
            ftruncate(fd, size);
            printf("Created %ld byte sparse file\n", size);

            char * buffer = (char *)mmap(NULL, (size_t)size,
                                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if ( buffer == MAP_FAILED ) {
                perror("mmap");
                exit(1);
            }
            printf("Done mmap - returned 0x0%lx\n", (unsigned long)buffer);

            strcpy( buffer, "cafebabe" );
            printf("Wrote to start\n");

            strcpy( buffer + (size - 9), "deadbeef" );
            printf("Wrote to end\n");

            if ( munmap(buffer, (size_t)size) < 0 ) {
                perror("munmap");
                exit(1);
            }
            close(fd);
            return 0;
        }
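    One setting that produces exactly this symptom (a guess worth checking, since the failure threshold sits at a suspiciously round number): a per-process virtual address-space limit. If `ulimit -v` reports a finite value, mmap() fails with ENOMEM once the mapping exceeds it, no matter how sparse the file is:

        # Check the address-space limit for the current shell (kB, or "unlimited")
        ulimit -v
        # If it is finite, lift it and retry
        ulimit -v unlimited && ./a.out bigfile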


  • find nearest match to array of doubles

    - by Scott
    Given the code below, how do I compare a List of objects' values with a test value? I'm building a geolocation application. I'll be passing in longitude and latitude, and I would like the service to answer back with the location closest to those values. I started down the path of converting to a string and formatting the values down to two decimal places, but that seemed a bit too ghetto, and I'm looking for a more elegant solution.

        public class Location : IEnumerable
        {
            public string label { get; set; }
            public double lat { get; set; }
            public double lon { get; set; }

            // Implement IEnumerable
            public IEnumerator GetEnumerator()
            {
                return (IEnumerator)this;
            }
        }

        [HandleError]
        public class HomeController : Controller
        {
            private List<Location> myList = new List<Location>
            {
                new Location { label="Atlanta Midtown", lon=33.657674, lat=-84.423130},
                new Location { label="Atlanta Airport", lon=33.794151, lat=-84.387228},
                new Location { label="Stamford, CT", lon=41.053758, lat=-73.530979},
                ...
            };

            public static int Main(String[] args)
            {
                string inLat = "-80.987654";
                double dblInLat = double.Parse(inLat);
                // here's where I would like to find the closest location to the inLat
                // once I figure out this, I'll implement the longitude, and I'll be set
            }
        }
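    A LINQ-based sketch of the usual approach (it assumes a dblInLon alongside dblInLat; squared Euclidean distance on raw coordinates is fine for ranking nearby points - switch to a haversine formula if the points span large distances):

        // Requires: using System; using System.Linq;
        // Nearest location by squared distance in coordinate space
        Location nearest = myList
            .OrderBy(l => Math.Pow(l.lat - dblInLat, 2) + Math.Pow(l.lon - dblInLon, 2))
            .First();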


  • Does Monitor.Wait ensure that fields are re-read?

    - by Marc Gravell
    It is generally accepted (I believe!) that a lock will force any values from fields to be reloaded (essentially acting as a memory barrier or fence - my terminology in this area gets a bit loose, I'm afraid), with the consequence that fields which are only ever accessed inside a lock do not themselves need to be volatile. (If I'm wrong already, just say!) A good comment was raised here questioning whether the same is true if code does a Wait() - i.e. once it has been Pulse()d, will it reload fields from memory, or could they be in a register (etc.)? Or more simply: does the field need to be volatile to ensure that the current value is obtained when resuming after a Wait()? Looking at Reflector, Wait calls down into ObjWait, which is a managed internalcall (the same as Enter). The scenario in question was:

        bool closing;
        public bool TryDequeue(out T value)
        {
            lock (queue)   // arbitrary lock-object (a private readonly ref-type)
            {
                while (queue.Count == 0)
                {
                    if (closing)               // <==== (2) access field here
                    {
                        value = default(T);
                        return false;
                    }
                    Monitor.Wait(queue);       // <==== (1) waits here
                }
                // ...blah, do something with the head of the queue
            }
        }

    Obviously I could just make it volatile, or I could move this out so that I exit and re-enter the Monitor every time it gets pulsed, but I'm intrigued to know whether either is necessary.

