Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • Django Database design -- Is this a good strategy for overriding defaults

    - by rh0dium
    Hi SO's, I have a question on good database design practices and I would like to leverage you guys for pointers. The project started out simple:

    - Hey, we have a bunch of questions we want answered for every project (no problem)
    - Which turned into... Hey, we have so many questions, can we group them into sections? (yup, we can do that)
    - Which led into... Can we weight these questions? And I don't really want some of these questions for my project (yes, but we are getting difficult)
    - And then I'm thinking they will want to have each section have its own weight...

    Requirements

    So there are the requirements:

    - For n number of projects
    - Allow an admin member to select the questions for a project
    - Allow the admin member to re-weight or use the default weights for the questions
    - Allow the admin member to re-weight the sections
    - Allow team members to answer the questions.

    So here is what I came up with. Please feel free to comment and provide better examples.

    models.py

        from django.db import models
        from django.contrib.sites.models import Site
        from django.conf import settings

        class Section(models.Model):
            """ This describes the various sections for a checklist: """
            name = models.CharField(max_length=64)
            description = models.TextField()

        class Question(models.Model):
            """ This simply provides a simple way to list out the questions. """
            question = models.CharField(max_length=255)
            answer_type = models.CharField(max_length=16)
            description = models.TextField()
            section = models.ForeignKey(Section)

        class ProjectQuestion(models.Model):
            """ These are the questions relevant to the project """
            question = models.ForeignKey(Question)
            answer = models.CharField(max_length=255)
            required = models.BooleanField(default=True)
            weight = models.FloatField(default = XXX)

        class Project(models.Model):
            """ Here is where we want to gather our questions """
            questions = models.ManyToManyField(ProjectQuestion)

    Immediate questions:

    - When I start a project, any ideas on how to "pre-populate" the questions (and ultimately the weights) for the project?
    - Is there a generally accepted method for doing this process that I am missing? Basically the idea is that you refer to the base questions, override their default weight, and store the answer.
    - It appears that a good chunk of the work will be done in the views and that a lot of checking will need to occur there. Is that OK?

    Again, feel free to give me better strategies!! Thanks
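
    One way to read the "pre-populate" question is to copy the catalogue Question rows into ProjectQuestion rows at the moment the admin selects them, carrying a default weight across. A minimal Django sketch of that idea follows; it assumes a default_weight column on Question (not present in the models above), so treat it as an illustration rather than a drop-in answer.

        # Sketch: seed a project's questions from the catalogue, copying default weights.
        # Assumes Question has a default_weight field, which the models above do not define.
        def populate_project(project, questions):
            for q in questions:
                pq = ProjectQuestion.objects.create(
                    question=q,
                    answer="",
                    required=True,
                    weight=q.default_weight,   # assumed field holding the catalogue default
                )
                project.questions.add(pq)      # matches the ManyToManyField on Project

        # usage (hypothetical): populate_project(p, Question.objects.filter(section=s))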

  • Transitioning from FlexBuilder 3 to FlashBuilder 4 ... there and back again.

    - by Robusto
    It's growing-pains time again. Some of our stuff requires FlashBuilder 4 and some still requires FlexBuilder 3. Both are installed OK, and no projects use both IDEs. The trouble is, when I go back to work on a FlexBuilder 3 project it takes freakin' forever to build and I get weird errors like these:

    This doesn't seem to cause any identifiable problems except to throw up a modal dialog at various points in the build process, forcing user interaction. But I do notice that memory fills up fast in FB3, and generally FB3 starts behaving strangely and ultimately quits once it gets up over 700MB.

    This is only a temporary bridge situation until we get all projects into FB4, but "temporary" could mean weeks if not months. Does anyone have any advice for how to get through this bridge period? Is there anything I can do to make these two IDEs work and play well together? Failing that, does anyone know why "java.lang.String" is the "reason" for the problem? Does Eclipse have a resource bundle somewhere that is getting corrupted when I go back and forth between the two?

  • use awk to identify multi-line record and filtering

    - by nanshi
    I need to process a big data file that contains multi-line records. Example input:

        1 Name Dan
        1 Title Professor
        1 Address aaa street
        1 City xxx city
        1 State yyy
        1 Phone 123-456-7890
        2 Name Luke
        2 Title Professor
        2 Address bbb street
        2 City xxx city
        3 Name Tom
        3 Title Associate Professor
        3 Like Golf
        4 Name
        4 Title Trainer
        4 Likes Running

    Note that the first integer field is unique and really identifies a whole record. So in the above input I really have 4 records, although I don't know how many lines of attributes each record may have. I need to:

    - identify valid records (must have "Name" and "Title" fields)
    - output the available attributes for each valid record, say "Name", "Title", "Address" are the needed fields.

    Example output:

        1 Name Dan
        1 Title Professor
        1 Address aaa street
        2 Name Luke
        2 Title Professor
        2 Address bbb street
        3 Name Tom
        3 Title Associate Professor

    So in the output file, record 4 is removed since it doesn't have the "Name" field. Record 3 doesn't have an Address field but is still printed to the output since it is a valid record that has "Name" and "Title". Can I do this with awk? But how do I identify a whole record using the first "id" field on each line? Thanks a lot to the unix shell script experts for helping me out! :)
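
    awk can do this by keying associative arrays on the first field ($1). Purely as an illustration of the same grouping-and-filtering logic, here is a short Python sketch (not awk, so treat it as pseudocode for the awk version); it assumes whitespace-separated fields and counts a field as present only when it carries a value:

        # Sketch: group lines by the leading id, then print Name/Title/Address for
        # records that have both a Name and a Title with non-empty values.
        import sys

        wanted = ("Name", "Title", "Address")
        records = {}                              # id -> {field name: original line}

        for line in sys.stdin:
            parts = line.split(None, 2)           # id, field, value
            if len(parts) < 3:                    # e.g. "4 Name" with no value
                continue
            rec_id, field = parts[0], parts[1]
            records.setdefault(rec_id, {})[field] = line.rstrip("\n")

        for rec_id in sorted(records, key=int):   # keep numeric record order
            fields = records[rec_id]
            if "Name" in fields and "Title" in fields:
                for f in wanted:
                    if f in fields:               # record 3 has no Address; just skip it
                        print(fields[f])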

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service of scheduled posts to some soc. network. Need help dealing with file uploads under high traffic.

    Process overview: User uploads files to SomeServer (not mine). SomeServer then responds with a JSON string. My web app should store that JSON response.

    Opt. 1 — Save, cURL POST, delete tmp. The stupid way I made it work: user uploads files to MyWebApp; MyWebApp cURLs the file further to SomeServer, getting the response.

    Opt. 2 — JS magic. The smart way it could be perfect: user uploads the file directly to SomeServer, from within an iFrame; MyWebApp gets the response through JavaScript. But this is(?) impossible due to the 'Same Origin Policy', isn't it?

    Opt. 3 — nginx proxying? The better way for a production server: user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp.

    Does this make any sense, and what would be the nginx config for, say, a /fileupload location to proxy it to SomeServer?

  • Variable amount of columns returned in mysqli prepared statement

    - by manyxcxi
    I have a situation where a dynamic query is being generated that could select anywhere from 1 to over 300 different columns across multiple tables. It currently works fine just doing a query; however, the issue I'm running into in using a prepared statement is that I do not know how to handle the fact that I don't know how many columns I will be asking for each time, and therefore don't know how to process the results. The reason I believe a bind statement will help is because once this query is run once, it will most likely (though not always) be run again with the exact same parameters. Currently I have something like this:

        $rows = array();
        $this->statement = $this->db->prepare($query);
        $this->statement->bind_param('i', $id);
        $this->statement->execute();
        $this->statement->bind_result($result);
        while ($this->statement->fetch()) {
            $rows[] = $result;
        }

    I know this doesn't work as I want it to. My question is how do I get the data back out of the query. Is it possible to bring the columns back in an associative array by column name, like a standard mysqli query?

  • Simplest PHP Routing framework .. ?

    - by David
    I'm looking for the simplest implementation of a routing framework in PHP, in a typical PHP environment (running on Apache, or maybe nginx). It's the implementation itself I'm mostly interested in, and how you'd accomplish it. I'm thinking it should handle URLs with the minimal rewriting possible (is it really a good idea to have the same entrypoint for all dynamic requests?!), and it should not mess with the querystring, so I should still be able to fetch GET params with $_GET['var'] as you'd usually do. So far I have only come across .htaccess solutions that put everything through an index.php, which is sort of okay. Not sure if there are other ways of doing it.

    How would you "attach" which URLs fit to which controllers, and the relation between them? I've seen different styles. One huge array, with regular expressions and other stuff to contain the mapping. The one I think I like the best is where each controller declares what map it has, and thereby you won't have one huge "global" map, but a lot of small ones, each neatly separated. So you'd have something like:

        class Root {
            public $map = array(
                'startpage' => 'ControllerStartPage'
            );
        }

        class ControllerStartPage {
            public $map = array(
                'welcome' => 'WelcomeControllerPage'
            );
        }

        // Etc ...

    Where:

        'http://myapp/'                        // maps to the Root class
        'http://myapp/startpage'               // maps to the ControllerStartPage class
        'http://myapp/startpage/welcome'       // maps to the WelcomeControllerPage class
        'http://myapp/startpage/?hello=world'  // Should of course have $_GET['hello'] == 'world'

    What do you think? Do you use anything yourself, or have any ideas? I'm not interested in huge frameworks already solving this problem, but the smallest possible implementation you could think of. I'm having a hard time coming up with a solution satisfying enough for my own taste. There must be something pleasing out there that handles a sane bootstrapping process of a PHP application without trying to pull a big magic hat over your head and force you to use "their way", or the highway! ;)

  • Time delay an external RSS feed

    - by x3ja
    I subscribe to a number of RSS feeds, mostly from within my own timezone (UK: currently GMT+1, a.k.a BST). However I'm also interested in news from New Zealand (currently GMT+12). My problem is caused by my addiction to needing to keep my unread count at, or near, zero. When I load up my RSS reader in the mornings it has gathered all the NZ news at once (normally around 100 items) and I feel compelled either to read them all or to mark them all as read to feed my need for zero-unread-count.

    I figured a good solution to this would be to time delay the RSS feed somehow, so I would be drip-fed the stories at their time +12 hours, so I could read them through the day as they come in. So my question (or, rather, questions):

    - Does such a thing exist currently & what is it? (no point reworking the wheel)
    - If not: what would be the best way to approach doing this myself? I have access to a Linux web server on which I can run scripts, create databases, store files etc, so there should be a way... I'm most conversant in perl and have done a little fiddling with XML within that, so would naturally process ... or is there some simpler way to do it that I'm missing?
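
    If nothing off the shelf turns up, one self-hosted approach is a small cron job that fetches the NZ feed, keeps only entries older than twelve hours, and writes a delayed copy that the reader subscribes to instead. A rough sketch of that idea follows, in Python rather than perl and using the third-party feedparser module, so both of those are assumptions on my part:

        # Sketch: republish a feed with a 12-hour delay. Run periodically from cron.
        # Assumes the third-party "feedparser" module; titles/links are minimally escaped.
        import calendar
        import time
        from xml.sax.saxutils import escape

        import feedparser

        DELAY = 12 * 3600
        FEED_URL = "http://example.co.nz/rss"          # hypothetical source feed

        def delayed_entries(url, delay=DELAY):
            now = time.time()
            for e in feedparser.parse(url).entries:
                if not getattr(e, "published_parsed", None):
                    continue                            # skip entries without a date
                published = calendar.timegm(e.published_parsed)
                if now - published >= delay:            # old enough to release
                    yield e

        def write_feed(entries, path):
            items = "".join(
                "<item><title>%s</title><link>%s</link></item>"
                % (escape(e.title), escape(e.link))
                for e in entries
            )
            with open(path, "w") as f:
                f.write('<?xml version="1.0"?><rss version="2.0"><channel>'
                        "<title>Delayed NZ news</title><link>http://example/</link>"
                        "<description>12-hour delayed mirror</description>"
                        "%s</channel></rss>" % items)

        write_feed(delayed_entries(FEED_URL), "/var/www/html/delayed.xml")

    A real version would also want to cap how far back it reaches, and to carry over GUIDs and publication dates so the reader does not treat re-emitted items as new.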

  • Visual Studio 2010 Professional - Problem Unit-Testing Web Services

    - by Ben
    Have created a very simple Web Service (asmx) in Visual Studio 2010 Professional, and am trying to use the auto-generated unit test cases. I get something that seems quite familiar on this site:

        The web site could not be configured correctly; getting ASP.NET process information failed.
        Requesting http://localhost:81/zfp/VSEnterpriseHelper.axd return an error:
        The remote server returned an error: (500) Internal Server Error.

    http://stackoverflow.com/questions/260432/500-error-running-visual-studio-asp-net-unit-test

    I have tried:

    1. Running the tests on IIS rather than the ASP.NET Development Server
    2. Adding and then removing the XML fragment to my Web Service's .config file
    3. Giving the MACHINE\ASPNET account Full control to the local folder

    My current questions:

    1. Why am I being bothered with this instrumentation / code coverage DLL, when this doesn't seem to be something that ships with Visual Studio 2010 Professional? Is there any way I can turn it off?
    2. I'm placing the node under in Web.config - is that the correct node?
    3. Is it possible to bind to a web service without using the webby test attributes? I've seen other people advising making the Web Service as light-weight as possible. I'm trying to call it with jQuery / AJAX / JSON, so being able to debug the actual web service would be really helpful.

    Best wishes, Ben

  • CentOS: make python 2.6 see django

    - by NP
    In a harrowing attempt to get mod_wsgi to run on CentOS 5.4, I've added python 2.6 as an optional library following the instructions here. The configuration seems fine except that when trying to ping the server the Apache log prints this error:

        mod_wsgi (pid=20033, process='otalo', application='127.0.0.1|'): Loading WSGI script '...django.wsgi'.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] mod_wsgi (pid=20033): Target WSGI script '...django.wsgi' cannot be loaded as Python module.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] mod_wsgi (pid=20033): Exception occurred processing WSGI script '...django.wsgi'.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] Traceback (most recent call last):
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218]   File "...django.wsgi", line 8, in <module>
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218]     import django.core.handlers.wsgi
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] ImportError: No module named django.core.handlers.wsgi

    When I go to my python2.6 install's command line and try 'import django', the module is not found (ImportError). However, my default python 2.4 installation (still working fine) is able to import successfully. How do I point python 2.6 to django? Thanks in advance.
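
    The usual cause of that ImportError is that Django is only installed into the python 2.4 site-packages, so the 2.6 interpreter that mod_wsgi embeds never sees it. Two common fixes are installing Django under 2.6 as well (e.g. running its setup.py with python2.6), or making its location visible from the WSGI script. A hedged sketch of the latter, with placeholder paths:

        # django.wsgi sketch: prepend the directories where Django and the project live.
        # Both paths and the settings module name are placeholders, not known values.
        import os
        import sys

        sys.path.insert(0, '/usr/lib/python2.4/site-packages')   # hypothetical: where django/ was installed
        sys.path.insert(0, '/path/to/project')                    # hypothetical project directory

        os.environ['DJANGO_SETTINGS_MODULE'] = 'otalo.settings'   # guessed from process='otalo' in the log

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    Re-using another interpreter's site-packages only works for pure-Python packages, so installing Django for 2.6 directly is the safer route.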

  • Passing an Ajax variable to a Codeigniter function

    - by Matt
    Hello, I think this is a simple one. I have a Codeigniter function which takes the inputs from a form and inserts them into a database. I want to Ajaxify the process. At the moment the first line of the function gets the id field from the form - I need to change this to get the id field from the Ajax post (which references a hidden field in the form containing the necessary value) instead. How do I do this please?

    My Codeigniter controller function:

        function add()
        {
            $product = $this->products_model->get($this->input->post('id'));
            $insert = array(
                'id'    => $this->input->post('id'),
                'qty'   => 1,
                'price' => $product->price,
                'size'  => $product->size,
                'name'  => $product->name
            );
            $this->cart->insert($insert);
            redirect('home');
        }

    And the jQuery Ajax function:

        $("#form").submit(function(){
            var dataString = $("input#id")
            //alert (dataString);return false;
            $.ajax({
                type: "POST",
                url: "/home/add",
                data: dataString,
                success: function() {
                }
            });
            return false;
        });

    As always, many thanks in advance.

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what is the best way to implement synching data. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience?

    For me, I'm thinking of something like this:

    1) Store Last Modified Date in iPhone
    2) Upon launching, send a message like getNewData.php?lastModifiedDate=...
    3) Server will process and send back only modified data from last time.
    4) This data is formatted as so:

        <+><data id="..."></data></+>                                  // add this to SQLite/CoreData
        <-><data id="..."></data></->                                  // remove this
        <%><data id="..."><attribute>newValue</attribute></data></%>   // new modified value

       I don't want to make <+>, <->, <%>... for each attribute as well, because it would be too complicated, so probably when I receive a <%> field, I would just remove the data with the specified id and then add it again (assuming id here is not some automatically auto-incremented field).
    5) Once everything is downloaded and updated, I will update the Last Modified Date field.

    The main problem with this strategy is: if the network goes down when I am updating something = the Last Modified Date is not yet updated = next time I relaunch the app, I will have to go through the same thing again. Not to mention potential inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data change), the user has to wait a long time until new data is available. Should I use Last-Modified-Date for each of the data fields and update data gradually?

  • General Address Parser for Freeform Text

    - by Daemonic
    We have a program that displays map data (think Google Maps, but with much more interactivity and custom layers for our clients). We allow navigation via a set of combo boxes that prefill certain fields with a bunch of data (i.e. Country: Canada, and the Province field is filled in. Select Ontario, and a list of Counties/Regions is filled in. Select a county/region, and a city is filled in, etc...). While this guarantees accurate addresses, it's a pain for the users if they don't know where a street address or a city is located (i.e., which county/region is Kitchener in?).

    So we are looking at trying to do an address parser with a freeform text field. The user could enter something like this (similar to Google Maps, Bing Maps, etc...):

        22 Main St, Kitchener, On

    And we could compartmentalize it into sections, do lookups on the data, and get to the point they are looking for (or suggest alternatives). The problem with this is: how do we properly compartmentalize the information? How do we break up the sections and find possible matches? I'm guessing we wouldn't be guaranteed that the user would enter data in a format we always expected (obviously). A follow-up to this would be how to present the data if we don't find an exact match (or find multiple exact matches... two cities with the same street name in different counties, for example).

    We have a ton of data available in the mapping data (MapInfo TAB format mostly), so we can do quick scans of street names, cities, states, etc. But I'm not sure about the best way to go about approaching this problem. Sure, using Google Maps would be nice, but most of our clients are on closed networks where outside access is not usually allowed, and most aren't willing to rely on Google Maps (since it doesn't contain as much information as they need, such as custom map layers). They could, obviously, go to Google, get the proper location and then move to our software, but this would be time consuming, and speed of the process can be quite important.
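
    A common starting point for the freeform case is to split on commas and resolve tokens from right to left against the reference lists already in the mapping data (provinces first, then cities, leaving the remainder as the street). The sketch below only illustrates that shape; the lookup tables are tiny stand-ins for the real gazetteer, and fuzzy matching and multiple-match handling are left out:

        # Sketch: split a freeform address on commas and resolve tokens right-to-left
        # against reference lists. PROVINCES and CITIES are tiny stand-ins for the real
        # mapping tables; fuzzy matching and multiple-match handling are omitted.
        PROVINCES = {"on": "Ontario", "ontario": "Ontario"}
        CITIES = {("Ontario", "kitchener"): "Waterloo Region"}   # (province, city) -> region

        def parse_address(text):
            parts = [p.strip() for p in text.split(",") if p.strip()]
            result = {"street": None, "city": None, "province": None, "region": None}

            # the right-most recognizable token is usually the broadest one
            if parts and parts[-1].lower() in PROVINCES:
                result["province"] = PROVINCES[parts.pop().lower()]
            if parts:
                candidate = parts.pop()
                key = (result["province"], candidate.lower())
                if key in CITIES:
                    result["city"] = candidate.title()
                    result["region"] = CITIES[key]   # answers "which region is Kitchener in?"
                else:
                    parts.append(candidate)          # not a known city; keep it as street text
            result["street"] = ", ".join(parts) or None
            return result

        print(parse_address("22 Main St, Kitchener, On"))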

  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (successor to FrontPage) to modify content. I'd like to allow them to keep doing that. The issues are:

    - Since I'd like this to be a WAP, not a website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines?
    - Can I specify a "default" action for a controller so that given a url like /products/new_view_here
    - Can I let them save pages (views) and see them in the browser without having to go through the check-in/build/deploy process?
    - I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly.

    The ideas I've come up with so far are:

    1. Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time consuming, but I haven't yet looked at the protocol spec. However, it would allow me to perform whatever operations I want on the server side, including checking files into SVN.
    2. Ditch the WAP in favor of a website project. I do not like having the source present on the server, however. Also, will MVC work in a website project?

    Surely someone has tackled this problem before?

  • boost::asio::async_resolve Problem

    - by Moo-Juice
    Hi All, I'm in the process of constructing a Socket class that uses boost::asio. To start with, I made a connect method that took a host and a port and resolved it to an IP address. This worked well, so I decided to look in to async_resolve. However, my callback always gets an error code of 995 (using the same destination host/port as when it worked synchronously).

    Code: function that starts the resolution:

        // resolve a host asynchronously
        template<typename ResolveHandler>
        void resolveHost(const String& _host, Port _port, ResolveHandler _handler) const
        {
            boost::asio::ip::tcp::endpoint ret;
            boost::asio::ip::tcp::resolver::query query(_host,
                boost::lexical_cast<std::string>(_port));
            boost::asio::ip::tcp::resolver r(m_IOService);
            r.async_resolve(query, _handler);
        }; // eo resolveHost

    Code that calls this function:

        void Socket::connect(const String& _host, Port _port)
        {
            // Anon function for resolution of the host-name and asynchronous calling of the above
            auto anonResolve = [this](const boost::system::error_code& _errorCode,
                                      boost::asio::ip::tcp::resolver_iterator _epIt)
            {
                // raise event
                onResolve.raise(SocketResolveEventArgs(*this,
                    !_errorCode ? (*_epIt).host_name() : String(""), _errorCode));

                // perform connect, calling back to anonymous function
                if(!_errorCode)
                    connect(*_epIt);
            };

            // Resolve the host calling back to anonymous function
            Root::instance().resolveHost(_host, _port, anonResolve);
        }; // eo connect

    The message() function of the error_code is:

        The I/O operation has been aborted because of either a thread exit or an application request

    And my main.cpp looks like this:

        int _tmain(int argc, _TCHAR* argv[])
        {
            morse::Root root;
            TextSocket s;
            s.connect("somehost.com", 1234);

            while(true)
            {
                root.performIO(); // calls io_service::run_one()
            }
            return 0;
        }

    Thanks in advance!

  • How to hold payment in paygate for a while?

    - by Fero
    Hi all, I have a query regarding holding a payment in the PAYGATE payment gateway. Here is the problem in brief: I am building a website where the payment should only be captured once a certain number of members have bought the product. For example, if there is an iPhone on my site, then that particular phone must be bought in certain quantities, which are set by the admin. The quantities may be bought by users one by one, or a single user can buy all the quantities at a single time. In this case, as a developer, how can I hold the payment received from a user? I don't want to receive the payments until the required quantities have been bought, because if they are not reached I would need to refund the money to the buyers' accounts, and we would like to avoid that process. That's why we are looking at holding the payment. Is it possible, or what is the best way to solve this problem? Please let me know your professional opinion. Thanks in advance...

  • Wireshark Dissector: How to Identify Missing UDP Frames?

    - by John Dibling
    How do you identify missing UDP frames in a custom Wireshark dissector?

    I have written a custom dissector for the CQS feed (reference page). One of our servers gaps when receiving this feed. According to Wireshark, some UDP frames are never received. I know that the frames were sent because all of our other servers are gap-free.

    A CQS frame consists of multiple messages, each having its own sequence number. My custom dissector provides the following data to Wireshark:

    - cqs.frame_gaps - the number of gaps within a UDP frame (always zero)
    - cqs.frame_first_seq - the first sequence number in a UDP frame
    - cqs.frame_expected_seq - the first sequence number expected in the next UDP frame
    - cqs.frame_msg_count - the number of messages in this UDP frame

    And I am displaying each of these values in custom columns, as shown in this screenshot:

    I tried adding code to my dissector that simply saves the last-processed sequence number (as a local static), and flags gaps when the dissector processes a frame where current_sequence != (previous_sequence + 1). This did not work because the dissector can be called in random-access order, depending on where you click in the GUI. So you could process frame 10, then frame 15, then frame 11, etc.

    Is there any way for my dissector to know if the frame that came before it (or the frame that follows) is missing? The dissector is written in C. (See also a companion post on serverfault.com)

  • How can I display multiple django modelformset forms together?

    - by JT
    I have a problem with needing to provide multiple model-backed forms on the same page. I understand how to do this with single forms, i.e. just create both the forms, call them something different, then use the appropriate names in the template. Now how exactly do you expand that solution to work with modelformsets? The wrinkle, of course, is that each 'form' must be rendered together in the appropriate fieldset. For example, I want my template to produce something like this:

        <fieldset>
            <label for="id_base-0-desc">Home Base Description:</label>
            <input id="id_base-0-desc" type="text" name="base-0-desc" maxlength="100" />
            <label for="id_likes-0-icecream">Want ice cream?</label>
            <input type="checkbox" name="likes-0-icecream" id="id_likes-0-icecream" />
        </fieldset>
        <fieldset>
            <label for="id_base-1-desc">Home Base Description:</label>
            <input id="id_base-1-desc" type="text" name="base-1-desc" maxlength="100" />
            <label for="id_likes-1-icecream">Want ice cream?</label>
            <input type="checkbox" name="likes-1-icecream" id="id_likes-1-icecream" />
        </fieldset>

    I am using a loop like this to process the results:

        for base_form, likes_form in map(None, base_forms, likes_forms):

    which works as I'd expect (I'm using map because the # of forms can be different). The problem is that I can't figure out a way to do the same thing with the templating engine. The system does work if I lay out all the base models together and then all the likes models afterwards, but it doesn't meet the layout requirements.
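
    Since the Django template language cannot call zip/map itself, one workable pattern is to pair the forms in the view and loop over the pairs in the template. A sketch of that idea, where Base and Likes stand in for the real model names:

        # Sketch of the view side: build both formsets with distinct prefixes and hand the
        # template pre-paired forms, since the template language cannot zip them itself.
        from itertools import izip_longest   # zip_longest on Python 3
        from django.forms.models import modelformset_factory
        from django.shortcuts import render_to_response

        from myapp.models import Base, Likes   # hypothetical app and model names

        BaseFormSet = modelformset_factory(Base)
        LikesFormSet = modelformset_factory(Likes)

        def edit(request):
            base_formset = BaseFormSet(request.POST or None, prefix="base")
            likes_formset = LikesFormSet(request.POST or None, prefix="likes")
            # pair form 0 with form 0, form 1 with form 1, padding the shorter list with None
            pairs = list(izip_longest(base_formset.forms, likes_formset.forms))
            return render_to_response("edit.html", {
                "base_formset": base_formset,   # still needed for the management forms
                "likes_formset": likes_formset,
                "pairs": pairs,                 # the template loops over these, fieldset by fieldset
            })

    In the template, {% for base_form, likes_form in pairs %} ... {% endfor %} can then emit one fieldset per pair, with {{ base_formset.management_form }} and {{ likes_formset.management_form }} rendered once outside the loop.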

  • Python: Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next url. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C Interrupted; stopping...    // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
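
    One way to keep the flag approach while avoiding the EINTR failure is to ask the OS to restart interrupted system calls after the handler runs (signal.siginterrupt, available from Python 2.6), optionally combined with an explicit retry around any blocking read the crawler owns. A sketch of both:

        # Sketch: keep the flag-setting handler, but stop SIGINT from breaking blocking calls.
        # signal.siginterrupt asks the OS to restart interrupted syscalls such as recv();
        # the retry wrapper is a belt-and-braces fallback for sockets read directly.
        import errno
        import signal
        import socket

        stop_requested = False

        def handle_sigint(signum, frame):
            global stop_requested
            stop_requested = True                     # main loop checks this between URLs

        signal.signal(signal.SIGINT, handle_sigint)
        signal.siginterrupt(signal.SIGINT, False)     # restart syscalls after the handler runs

        def recv_retry(sock, n):
            while True:
                try:
                    return sock.recv(n)
                except socket.error as e:
                    if e.args[0] != errno.EINTR:
                        raise                         # a real socket error, not an interruption
                    if stop_requested:
                        return ""                     # let the caller wind down and flush to disk

    Since the recv in the traceback happens inside the standard library (via whatever HTTP client the crawler uses), the siginterrupt call is the part that applies there; the wrapper only helps for sockets the crawler reads itself.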

  • How to start AJAX in Zend?

    - by Awan
    I am working on some projects as a developer (PHP, MySQL) in which AJAX and jQuery are already implemented. But now I want to learn how to implement the AJAX and jQuery stuff myself. Can anyone tell me the exact steps with an explanation?

    I have created a project in Zend. There is only one controller (IndexController) and two actions (a and b) in my project now. Now I want to use Ajax in my project, but I don't know how to start. I read some tutorials but was unable to completely understand them. I have index.phtml like this:

        <a href='index/a'>Load info A</a> <br/>
        <a href='index/b'>Load info B</a> <br />
        <div id=one>load first here<div>
        <div id=two>load second here</div>

    Here index is the controller in the links, and a and b are actions. Now I have four files like this:

        a1.phtml   I am a1
        a2.phtml   I am a2
        b1.phtml   I am b1
        b2.phtml   I am b2

    I think you have got my point:

    - When the user clicks the first link (Load info A), then a1.phtml should be loaded into div one and a2.phtml should be loaded into div two.
    - When the user clicks the second link (Load info B), then b1.phtml should be loaded into div one and b2.phtml should be loaded into div two.

    And can someone tell me the purpose of JSON in this process and how to use it also? Thanks

  • commons-exec: hanging when I call executor.execute(commandLine);

    - by Stefan Kendall
    I have no idea why this is hanging. I'm trying to capture output from a process run through commons-exec, and I continue to hang. I've provided an example program to demonstrate this behavior below.

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.PipedInputStream;
        import java.io.PipedOutputStream;

        import org.apache.commons.exec.CommandLine;
        import org.apache.commons.exec.DefaultExecutor;
        import org.apache.commons.exec.ExecuteException;
        import org.apache.commons.exec.PumpStreamHandler;

        public class test {
            public static void main(String[] args) {
                String command = "java";
                PipedOutputStream output = new PipedOutputStream();
                PumpStreamHandler psh = new PumpStreamHandler(output);
                CommandLine cl = CommandLine.parse(command);
                DefaultExecutor exec = new DefaultExecutor();
                DataInputStream is = null;
                try {
                    is = new DataInputStream(new PipedInputStream(output));
                    exec.setStreamHandler(psh);
                    exec.execute(cl);
                } catch (ExecuteException ex) {
                } catch (IOException ex) {
                }
                System.out.println("huh?");
            }
        }

  • Webcrawler, feedback?

    - by Jan Kuboschek
    Hey folks, every once in a while I have the need to automate data collection tasks from websites. Sometimes I need a bunch of URLs from a directory, sometimes I need an XML sitemap (yes, I know there is lots of software for that and online services). Anyways, as a follow-up to my previous question I've written a little webcrawler that can visit websites.

    - Basic crawler class to easily and quickly interact with one website.
    - Override "doAction(String URL, String content)" to process the content further (e.g. store it, parse it).
    - Concept allows for multi-threading of crawlers. All class instances share processed and queued lists of links.
    - Instead of keeping track of processed links and queued links within the object, a JDBC connection could be established to store links in a database.
    - Currently limited to one website at a time; however, it could be expanded upon by adding an externalLinks stack and adding to it as appropriate.
    - JCrawler is intended to be used to quickly generate XML sitemaps or parse websites for your desired information. It's lightweight.

    Is this a good/decent way to write the crawler, provided the limitations above?

    http://pastebin.com/VtgC4qVE - Main.java
    http://pastebin.com/gF4sLHEW - JCrawler.java
    http://pastebin.com/VJ1grArt - HTMLUtils.java

    Thanks for your feedback in advance! :)

  • Python Ephem / Datetime calculation

    - by dassouki
    The output should process the first date as "day" and the second as "night". I've been playing with this for a few hours now and can't figure out what I'm doing wrong. Any ideas?

    Edit: I assume that the problem is due to my date comparison implementation.

    Output:

        $ python time_of_day.py
        * should be day:
          event date:   2010/4/6 16:00:59
          prev rising:  2010/4/6 09:24:24
          prev setting: 2010/4/5 23:33:03
          next rise:    2010/4/7 09:22:27
          next set:     2010/4/6 23:34:27
          day
        * should be night:
          event date:   2010/4/6 00:01:00
          prev rising:  2010/4/5 09:26:22
          prev setting: 2010/4/5 23:33:03
          next rise:    2010/4/6 09:24:24
          next set:     2010/4/6 23:34:27
          day

    time_of_day.py:

        import datetime
        import ephem  # install from http://pypi.python.org/pypi/pyephem/

        # event_time is just a date time corresponding to an sql timestamp
        def type_of_light(latitude, longitude, event_time, utc_time, horizon):
            o = ephem.Observer()
            o.lat, o.long, o.date, o.horizon = latitude, longitude, event_time, horizon

            print "event date ", o.date
            print "prev rising: ", o.previous_rising(ephem.Sun())
            print "prev setting: ", o.previous_setting(ephem.Sun())
            print "next rise: ", o.next_rising(ephem.Sun())
            print "next set: ", o.next_setting(ephem.Sun())

            if o.previous_rising(ephem.Sun()) <= o.date <= o.next_setting(ephem.Sun()):
                return "day"
            elif o.previous_setting(ephem.Sun()) <= o.date <= o.next_rising(ephem.Sun()):
                return "night"
            else:
                return "error"

        print "should be day: ", type_of_light('45.959','-66.6405','2010/4/6 16:01','-4', '-6')
        print "should be night: ", type_of_light('45.959','-66.6405','2010/4/6 00:01','-4', '-6')
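
    Two things stand out in that output: the utc_time argument is never applied to o.date (so '2010/4/6 00:01' is treated as UTC), and for the night case both range comparisons are true at once, so the first branch always wins. One possible fix, sketched here under the same Observer setup, is to convert the event time to UTC first and then classify by which solar event comes next:

        # Sketch: it is "day" when the next solar event is a setting, "night" when it is a rising.
        # event_time_utc must already be UTC; applying the utc_time offset is left to the caller.
        import ephem

        def type_of_light2(latitude, longitude, event_time_utc, horizon):
            o = ephem.Observer()
            o.lat, o.long, o.date, o.horizon = latitude, longitude, event_time_utc, horizon
            sun = ephem.Sun()
            # If the sun sets before it next rises, it must currently be above the horizon.
            return "day" if o.next_setting(sun) < o.next_rising(sun) else "night"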

  • Pros and cons of each JEE server for developing within Eclipse

    - by Thorbjørn Ravn Andersen
    Eclipse JEE has a lot of server adapters allowing development against many different application servers like JBoss, Glassfish and WebSphere. Frequently you can benefit from using a different server for developing new features than for production, simply because it may be able to deploy changes much faster, and when the functionality is in place you can iron out bugs for the production platform. Unfortunately, finding that server is a time-consuming process, where others' experience is invaluable. If you have experience with any server with an Eclipse Server Adapter, please add your findings and your recommendation. I believe that the following is of interest:

    - Does saving a file trigger an update in the server, giving save edit + reload browser functionality?
    - How fast is a deployment? (Saved a JSP? Java class? Static file?)
    - Can the actual server be downloaded by the Server Adapter Wizard, allowing for easy installation?
    - Are there known bugs and issues with suitable work-arounds?
    - Is debugging fully supported? Is profiling?
    - Would you recommend this server?

    Note: Eclipse can also work with Tomcat, but that is a web container, which cannot deploy EAR files.

  • MSI Installer starts auto-repair when service starts

    - by Josh Clark
    I have a WiX-based MSI that installs a service and some shortcuts (and lots of other files that don't). The shortcut is created as described in the WiX docs, with a registry key under HKCU as the key file. This is an all-users install, but to get past ICE38, this registry key has to be under the current user.

    When the service starts (it runs under the SYSTEM account) it notices that that registry key isn't valid (at least for that user) and runs the install again to "repair". In the Event Log I get MsiInstaller Events 1001 and 1004 showing that "The resource 'HKEY_CURRENT_USER\SOFTWARE\MyInstaller\Foo' does not exist." This isn't surprising, since the SYSTEM user wouldn't have this key. I turned on system-wide MSI logging, and the auto-repair created its log file in the C:\Windows\Temp folder rather than a specific user's TEMP folder, which seems to imply the current user was SYSTEM (plus the log file shows the "Calling process" to be my service).

    Is there something I can do to disable the auto-repair functionality? Am I doing something wrong or breaking some MSI rule? Any hints on where to look next?

  • Java: easiest way to package both Java 1.5 and 1.6 code

    - by WizardOfOdds
    I want to package a piece of code that absolutely must run on Java 1.5. There's one part of the code where the program can be "enhanced" if the VM is a 1.6 VM. Basically it's this method:

        private long[] findDeadlockedThreads() {
            // JDK 1.5 only supports the findMonitorDeadlockedThreads()
            // method, so you need to comment out the following three lines
            if (mbean.isSynchronizerUsageSupported())
                return mbean.findDeadlockedThreads();
            else
                return mbean.findMonitorDeadlockedThreads();
        }

    What would be the easiest way to have this compile on 1.5 and yet do the 1.6 method calls when on 1.6? In the past I've done something similar by compiling a unique 1.6 class that I would package with my app and instantiate using a ClassLoader when on 1.6 (because a 1.6 JVM is perfectly fine mixing 0x32 and 0x31 classes), but I think it's a bit overkill (and a bit painful because during the build process you have to build both 0x31 and 0x32 .class files). How should I go about it if I wanted to compile the above method on 1.5? Maybe using reflection, but then how? (I'm not familiar at all with reflection.)

    Note: if you're curious, the above method comes from this article: http://www.javaspecialists.eu/archive/Issue130.html (but I don't want to "comment the three lines" like in the article, I want this to compile and run on both 1.5 and 1.6)
