Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • Python: Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be left in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next URL. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C Interrupted; stopping...    // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
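
    A minimal sketch of the flag-based handler with one addition: signal.siginterrupt() (available since Python 2.6) asks the kernel to restart system calls interrupted by that signal (SA_RESTART) instead of failing them with EINTR. The retry wrapper is a belt-and-braces assumption, not something guaranteed to be needed:

        import errno
        import signal
        import socket

        stop_requested = False

        def handle_sigint(signum, frame):
            global stop_requested
            stop_requested = True

        signal.signal(signal.SIGINT, handle_sigint)
        # Restart system calls interrupted by SIGINT rather than
        # raising "[Errno 4] Interrupted system call".
        signal.siginterrupt(signal.SIGINT, False)

        def recv_retrying(sock, n):
            # Fallback in case EINTR still surfaces on some call.
            while True:
                try:
                    return sock.recv(n)
                except socket.error, e:
                    if e.args[0] != errno.EINTR:
                        raise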


  • SVN supports historical merges, so how is Mercurial better?

    - by radman
    Hi, I'm a long-time SVN user and have been hearing a lot of brouhaha about Mercurial and decentralised version control systems in general. The main touted feature I am aware of is that merging in Mercurial is much easier, because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the red book in the section on merging, SVN already supports this with mergeinfo. I have not actually used this feature (I wanted to, but our repo version wasn't recent enough), but is this SVN feature particularly different from what Mercurial offers? For anyone who is not aware, the suggested workflow for historical merging in SVN is this: branch from the development trunk to do your own thing; regularly merge changes from trunk into your branch to stay up to date; merge back when you're done, with the mergeinfo smoothing the process. Without historical merge data this is a nightmare, because the comparison is strictly on the differences between the files and does not take into account the steps taken along the way, so each change in the development trunk puts you further into possible conflict when you merge back. Now what I would like to know is: does merging in Mercurial provide a significant advantage compared with mergeinfo in SVN, or is this just a lot of hot air about nothing? Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?
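
    For concreteness, a sketch of that SVN workflow as commands (mergeinfo-aware merging needs an SVN 1.5+ server and client, and the ^/ URL shorthand needs a 1.6+ client; both are assumptions to check against your setup):

        svn copy ^/trunk ^/branches/feature -m "Branch for feature work"
        # ...work on the branch, then periodically sync:
        svn merge ^/trunk              # mergeinfo records what was merged
        # when the feature is done, back in a trunk working copy:
        svn merge --reintegrate ^/branches/feature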


  • Microsoft products such as Visual Studio 2010 do not require entering a serial number

    - by MainMa
    Hi, I am a member of WebsiteSpark and was a member of DreamSpark. Both programs let members download software and provide serial keys to use with it. Some software, like Windows Server, has an ISO file to download and a serial number displayed on the website which I must enter during installation. Some other software does not have any serial key. For example, when I downloaded Visual Studio 2010, there was just a link to an ISO file, and during installation there was no serial-number field (whereas Visual Studio 2008 had this field at the beginning of the installation process). It is the same with SQL Server 2008 and Microsoft Expression Studio 3. Even when I downloaded the public trial RTM version of Windows Seven Enterprise, there was no serial number to enter. I don't think such expensive products as SQL Server 2008 Enterprise are delivered without serials and online validation, so I suppose the serial is embedded into the product itself, either in the installation binaries or in a separate config file, and is already in the ISO I download, which is why I do not have to enter it. So my question is: how is this done technically? Is each 2 GB ISO generated on demand on the server to embed a serial each time the ISO is requested? If so, it must have a huge impact on server performance (no caching, no streaming...), so what techniques might be used behind the scenes? I want to implement the same feature in a product I intend to ship (to simplify installation by not asking the user to enter a serial number), but I really don't see how to do it with low impact on server performance.
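
    One plausible scheme that avoids regenerating the image (purely a sketch of the idea, not how Microsoft actually does it): reserve a fixed-size placeholder region inside the ISO and overwrite just those bytes while streaming it out, so the file on disk stays shared and cacheable. All names below are hypothetical:

        PLACEHOLDER = b"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"  # bytes reserved in the image

        def stream_iso_with_serial(path, serial, chunk_size=1 << 16):
            key = serial.encode("ascii").ljust(len(PLACEHOLDER), b" ")
            assert len(key) == len(PLACEHOLDER)  # equal length keeps all offsets valid
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    # Naive: assumes the placeholder never straddles a chunk
                    # boundary; a real server would track the absolute offset.
                    yield chunk.replace(PLACEHOLDER, key)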


  • How to determine whether a character vector is a valid numeric or integer vector

    - by Andrew Barr
    I am trying to turn a nested list structure into a dataframe. The list looks similar to the following (it is serialized data from parsed JSON, read in using the httr package):

        myList <- list(object1 = list(w = 1, x = list(y = 0.1, z = "cat")),
                       object2 = list(w = 2, x = list(y = 0.2, z = "dog")))

    unlist(myList) does a great job of recursively flattening the list, and I can then use lapply to flatten all the objects nicely:

        flatList <- lapply(myList, FUN = function(object) {
            return(as.data.frame(rbind(unlist(object))))
        })

    And finally, I can button it up using plyr::rbind.fill:

        myDF <- do.call(plyr::rbind.fill, flatList)
        str(myDF)
        # 'data.frame': 2 obs. of 3 variables:
        #  $ w  : Factor w/ 2 levels "1","2": 1 2
        #  $ x.y: Factor w/ 2 levels "0.1","0.2": 1 2
        #  $ x.z: Factor w/ 2 levels "cat","dog": 1 2

    The problem is that w and x.y are now being interpreted as character vectors, which by default get parsed as factors in the dataframe. I believe unlist() is the culprit, but I can't figure out another way to recursively flatten the list structure. A workaround would be to post-process the dataframe and assign data types then. What is the best way to determine whether a vector is a valid numeric or integer vector?
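
    A sketch of the post-processing route, built on base R's type.convert() (a real base function; the wrapper names here are made up):

        # Re-coerce every column: type.convert() maps "1" to integer,
        # "0.1" to numeric, and leaves "cat"/"dog" as character.
        myDF[] <- lapply(myDF, function(col) {
            type.convert(as.character(col), as.is = TRUE)
        })

        # Or test a single character vector directly: it is numeric-like
        # exactly when coercion introduces no new NAs.
        is_numeric_like <- function(x) {
            suppressWarnings(!any(is.na(as.numeric(x)) & !is.na(x)))
        }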


  • iPhone AVAudioPlayer failed to find codec

    - by Anthony
    Hello, I am writing an app that downloads a wav file from a server and needs to play that file. The files use the mulaw codec with 2:1 compression. These wav files are dynamically created by a separate process, so there is no way for me to pre-convert them to a different format or codec; I need to be able to play them as is. I am using an AVAudioPlayer instance initialized as follows:

        NSURL *audioURL = [[NSURL alloc] initWithString:@"http://xxx.../file.wav"];
        NSData *audioData = [[NSData alloc] initWithContentsOfURL:audioURL];
        AVAudioPlayer *audio = [[AVAudioPlayer alloc] initWithData:audioData error:nil];
        [audio play];

    However, when the play method executes, I get the following console output when running on the Simulator:

        AudioQueue codec policy 1: failed to find a codec of the requested type

    I also tried saving the downloaded data to a local file and using a file URL, but that yields the same result. The downloaded file plays fine in desktop media players on both Mac and Windows. The SDK docs state that the mulaw codec is supported on the iPhone, so I am unsure why it is failing to find it. Any assistance would be greatly appreciated. Thanks.
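
    One cheap diagnostic step, as a sketch (initWithData:error: is the real initializer; everything else is illustrative): pass an NSError out-parameter instead of nil and log it, and try the same build on a device, since the Simulator's installed codec set can differ from the hardware's.

        NSError *error = nil;
        AVAudioPlayer *audio = [[AVAudioPlayer alloc] initWithData:audioData
                                                             error:&error];
        if (audio == nil) {
            NSLog(@"AVAudioPlayer init failed: %@", [error localizedDescription]);
        } else {
            [audio play];
        }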


  • Redirect uploaded files to another server using nginx

    - by Serg ikS
    I am creating a web service of scheduled posts to a social network and need help dealing with file uploads under high traffic.

    Process overview: the user uploads files to SomeServer (not mine); SomeServer then responds with a JSON string; my web app must store that JSON response.

    Opt. 1 — save, cURL POST, delete tmp. The stupid way I made it work: the user uploads files to MyWebApp, and MyWebApp cURLs each file on to SomeServer, getting the response.

    Opt. 2 — JS magic. The smart way it could be perfect: the user uploads the file directly to SomeServer from within an iFrame, and MyWebApp gets the response through JavaScript. But this is(?) impossible due to the 'Same Origin Policy', isn't it?

    Opt. 3 — nginx proxying? The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp.

    Does this make any sense, and what would the nginx config be for, say, a /fileupload location that proxies to SomeServer?
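
    A minimal sketch of the Opt. 3 location block (the directives are standard nginx; the upstream URL and size are placeholder assumptions):

        location /fileupload {
            # Hypothetical upstream; substitute SomeServer's real endpoint.
            proxy_pass       http://someserver.example.com/upload;
            proxy_set_header Host someserver.example.com;
            client_max_body_size 50m;   # raise nginx's 1m default for uploads
        }

    Note that plain proxy_pass returns SomeServer's JSON straight to the browser; MyWebApp would still need some extra step (a follow-up request, or server-side post-processing) to capture that response itself.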


  • Wait on multiple condition variables on Linux without unnecessary sleeps?

    - by Joseph Garvin
    I'm writing a latency-sensitive app that, in effect, wants to wait on multiple condition variables at once. I've read of several ways to get this functionality on Linux (apparently it is built in on Windows), but none of them seem suitable for my app. The methods I know of are:

    1. Have one thread wait on each of the condition variables you want to wait on; when woken, each signals a single condition variable which you wait on instead.
    2. Cycle through multiple condition variables with a timed wait.
    3. Write dummy bytes to files or pipes instead, and poll on those.

    #1 and #2 are unsuitable because they cause unnecessary sleeping. With #1, you have to wait for the dummy thread to wake up, then signal the real thread, then for the real thread to wake up, instead of the real thread just waking up to begin with -- the extra scheduler quantum spent on this actually matters for my app, and I'd prefer not to have to use a full-fledged RTOS. #2 is even worse: you potentially spend N * timeout time asleep, or your timeout will be 0, in which case you never sleep (endlessly burning CPU and starving other threads is also bad). For #3, pipes are problematic because if the thread being 'signaled' is busy or even crashes (I'm in fact dealing with separate processes rather than threads -- the mutexes and conditions would be stored in shared memory), the writing thread gets stuck once the pipe's buffer is full, as do any other clients. Files are problematic because they would grow endlessly for as long as the app ran. Is there a better way to do this? I'm curious about answers appropriate for Solaris as well.
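
    A sketch of a fourth option on Linux that avoids the pipe-fills-up failure mode: give each "condition" an eventfd and multiplex them with epoll. Signaling an eventfd never blocks on a slow or dead consumer, because the kernel just adds to a 64-bit counter (eventfd needs Linux 2.6.22+, the EFD_NONBLOCK flag 2.6.27+; for separate processes the fds must be inherited or passed over a Unix socket):

        #include <stdint.h>
        #include <sys/epoll.h>
        #include <sys/eventfd.h>
        #include <unistd.h>

        /* One eventfd per "condition". */
        int make_event(void) { return eventfd(0, EFD_NONBLOCK); }

        /* Signal: never blocks, just bumps the counter. */
        void signal_event(int efd) {
            uint64_t one = 1;
            write(efd, &one, sizeof one);
        }

        /* Wait for any registered event; each efd was added once with
           epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev) beforehand. */
        int wait_any(int epfd) {
            struct epoll_event ev;
            uint64_t count;
            if (epoll_wait(epfd, &ev, 1, -1) < 1)
                return -1;
            read(ev.data.fd, &count, sizeof count);  /* drain the counter */
            return ev.data.fd;
        }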


  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content-management capabilities. The people who will be managing the content prefer to use SharePoint Designer (successor to FrontPage) to modify content, and I'd like to let them keep doing that. The issues are:

    1. Since I'd like this to be a WAP, not a Website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines? Can I specify a "default" action for a controller so that a URL like /products/new_view_here resolves to a view they just saved? Can I let them save pages (views) and see them in the browser without having to go through the check-in/build/deploy process?
    2. I'd like their changes to be stored in SVN; SharePoint Designer seems to support only Visual SourceSafe (ugh) directly.

    The ideas I've come up with so far are:

    1. Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time-consuming, but I haven't yet looked at the protocol spec. It would, however, allow me to perform whatever operations I want on the server side, including checking files into SVN.
    2. Ditch the WAP in favor of a Website project. I do not like having the source present on the server, however. Also, will MVC work in a Website project?

    Surely someone has tackled this problem before?
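
    A sketch of the catch-all route idea (MapRoute and View(name) are the real MVC APIs; the route, controller and folder names are hypothetical), letting one action serve whichever view file was last saved:

        routes.MapRoute(
            "ContentPages",
            "products/{viewName}",
            new { controller = "Products", action = "Show" }
        );

        public class ProductsController : Controller
        {
            // Renders /Views/Products/<viewName>.aspx if it exists.
            // A whitelist or path check is needed before shipping this,
            // since viewName comes straight from the URL.
            public ActionResult Show(string viewName)
            {
                return View(viewName);
            }
        }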


  • get_user() running in kernel mode returns an error

    - by Fangkai Yang
    Hi, all, I have a problem with the get_user() macro. What I did is as follows. I run the following program:

        int main() {
            int a = 20;
            printf("address of a: %p", &a);
            sleep(200);
            return 0;
        }

    When the program runs, it outputs the address of a, say 0xbff91914. Then I pass this address to a module running in kernel mode that retrieves the contents at this address. (When I did this, I also made sure the process hadn't terminated, which is why I put it to sleep for 200 seconds.) The address is first sent as a string, and I cast it to a pointer type:

        int *ptr = (int *)simple_strtol(buffer, NULL, 16);
        printk("address: %p", ptr);  /* confirms the cast: prints bff91914, as expected */

        int val = 0;
        int res;
        res = get_user(val, (int *)ptr);

    However, res is always nonzero, meaning that get_user() returns an error. I am wondering what the problem is.... Thank you!! -- Fangkai
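
    For reference, a sketch of the context in which get_user() is defined to work (the ioctl handler and device are hypothetical): the macro reads a user-space address in the *current* task's address space, so it is normally called from a syscall path entered by the very process that owns the pointer, not from an arbitrary kernel context.

        /* Hypothetical ioctl handler; "arg" is a user pointer passed in
         * by the process calling ioctl(), so get_user() can resolve it. */
        static long demo_ioctl(struct file *filp, unsigned int cmd,
                               unsigned long arg)
        {
            int val;

            if (get_user(val, (int __user *)arg))
                return -EFAULT;  /* bad address, or not this process's memory */
            printk(KERN_INFO "value: %d\n", val);
            return 0;
        }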


  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough for Java to fetch results from them after they are created. Because of the size of the data involved, the task must be completed in batches; each batch is a call to a stored procedure made through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. In production, however, I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts into or updates temporary tables; all access to non-temporary tables is selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So, my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running-transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and to prevent those tables from being locked by the transaction?
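
    For experimenting with isolation levels, a sketch of the setup described (TransactionTemplate, TransactionCallbackWithoutResult and the ISOLATION_* constants are the real Spring APIs; the DAO call is hypothetical):

        TransactionTemplate tx = new TransactionTemplate(transactionManager);
        // Make the isolation explicit instead of relying on the default:
        // ISOLATION_DEFAULT just delegates to the database's own setting,
        // which for InnoDB is REPEATABLE READ.
        tx.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);
        tx.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // All batches run here on one connection, so every stored
                // procedure call sees the same session-scoped temp tables.
                reportDao.runAllBatches();  // hypothetical DAO call
            }
        });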


  • Pros and cons of each JEE server for developing within Eclipse

    - by Thorbjørn Ravn Andersen
    Eclipse JEE has a lot of server adapters allowing development against many different application servers, like JBoss, Glassfish and WebSphere. Frequently you can benefit from using a different server for developing new features than for production, simply because it may deploy changes much faster; when the functionality is in place, you can iron out bugs on the production platform. Unfortunately, finding that server is a time-consuming process, where others' experience is invaluable. If you have experience with any server with an Eclipse server adapter, please add your findings and your recommendation. I believe the following points are of interest:

    - Does saving a file trigger an update in the server, giving save + reload-browser functionality?
    - How fast is a deployment? (For a saved JSP? A Java class? A static file?)
    - Can the actual server be downloaded by the Server Adapter wizard, allowing easy installation?
    - Are there known bugs and issues, with suitable workarounds?
    - Is debugging fully supported? Is profiling?
    - Would you recommend this server?

    Note: Eclipse can also work with Tomcat, but that is a web container, which cannot deploy EAR files.


  • How to fix "OutOfMemoryError: java heap space" while compiling MonoDroid App in MonoDevelop

    - by Rodja
    When I try to compile one of my projects, I recently get the following error:

        Tool /usr/bin/java execution started with arguments:
            -jar /Applications/android-sdk-mac_x86/platform-tools/lib/dx.jar
            --no-strict --dex --output=obj/Debug/android/bin/classes.dex
            obj/Debug/android/bin/classes
            /Developer/MonoAndroid/usr/lib/mandroid/platforms/android-8/mono.android.jar
            FlurryAnalytics/Jars/FlurryAgent.jar
            Jars/android-support-v4.jar

        UNEXPECTED TOP-LEVEL ERROR:
        java.lang.OutOfMemoryError: Java heap space
            at com.android.dx.rop.code.RegisterSpecSet.<init>(RegisterSpecSet.java:49)
            at com.android.dx.rop.code.RegisterSpecSet.mutableCopy(RegisterSpecSet.java:383)
            at com.android.dx.ssa.LocalVariableInfo.mutableCopyOfStarts(LocalVariableInfo.java:169)
            at com.android.dx.ssa.LocalVariableExtractor.processBlock(LocalVariableExtractor.java:104)
            at com.android.dx.ssa.LocalVariableExtractor.doit(LocalVariableExtractor.java:90)
            at com.android.dx.ssa.LocalVariableExtractor.extract(LocalVariableExtractor.java:56)
            at com.android.dx.ssa.SsaConverter.convertToSsaMethod(SsaConverter.java:50)
            at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:99)
            at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:73)
            at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:273)
            at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:134)
            at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:87)
            at com.android.dx.command.dexer.Main.processClass(Main.java:487)
            at com.android.dx.command.dexer.Main.processFileBytes(Main.java:459)
            at com.android.dx.command.dexer.Main.access$400(Main.java:67)
            at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:398)
            at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:245)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:131)
            at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:109)
            at com.android.dx.command.dexer.Main.processOne(Main.java:422)
            at com.android.dx.command.dexer.Main.processAllFiles(Main.java:333)
            at com.android.dx.command.dexer.Main.run(Main.java:209)
            at com.android.dx.command.dexer.Main.main(Main.java:174)
            at com.android.dx.command.Main.main(Main.java:91)

    Other projects build as expected. I think I need to increase the heap size for this Java build step, but how?
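
    The generic JVM answer is the -Xmx flag (e.g. java -Xmx1g -jar dx.jar ...), if you control the invocation yourself. Whether MonoDevelop exposes a hook for that is version-dependent; as a sketch, later Mono for Android / Xamarin.Android releases read a JavaMaximumHeapSize MSBuild property from the .csproj, which is worth checking against your version:

        <!-- Hypothetical for this MonoDroid version; verify it is honored. -->
        <PropertyGroup>
          <JavaMaximumHeapSize>1G</JavaMaximumHeapSize>
        </PropertyGroup>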


  • commons-exec: hanging when I call executor.execute(commandLine);

    - by Stefan Kendall
    I have no idea why this is hanging. I'm trying to capture output from a process run through commons-exec, and it continues to hang. I've provided an example program below to demonstrate the behavior.

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.PipedInputStream;
        import java.io.PipedOutputStream;

        import org.apache.commons.exec.CommandLine;
        import org.apache.commons.exec.DefaultExecutor;
        import org.apache.commons.exec.ExecuteException;
        import org.apache.commons.exec.PumpStreamHandler;

        public class test {
            public static void main(String[] args) {
                String command = "java";
                PipedOutputStream output = new PipedOutputStream();
                PumpStreamHandler psh = new PumpStreamHandler(output);
                CommandLine cl = CommandLine.parse(command);
                DefaultExecutor exec = new DefaultExecutor();
                DataInputStream is = null;
                try {
                    is = new DataInputStream(new PipedInputStream(output));
                    exec.setStreamHandler(psh);
                    exec.execute(cl);
                } catch (ExecuteException ex) {
                } catch (IOException ex) {
                }
                System.out.println("huh?");
            }
        }
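
    A sketch of one common rework (ByteArrayOutputStream and PumpStreamHandler are the real JDK/commons-exec APIs): nothing ever reads from the PipedInputStream above, so once "java" prints more than the pipe's small buffer, the pump thread blocks and execute() never returns. Buffering in memory sidesteps that, as long as the output is reasonably small:

        import java.io.ByteArrayOutputStream;

        import org.apache.commons.exec.CommandLine;
        import org.apache.commons.exec.DefaultExecutor;
        import org.apache.commons.exec.PumpStreamHandler;

        public class CaptureOutput {
            public static void main(String[] args) throws Exception {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                DefaultExecutor exec = new DefaultExecutor();
                // Both stdout and stderr of the child go into "out".
                exec.setStreamHandler(new PumpStreamHandler(out));
                int exit = exec.execute(CommandLine.parse("java -version"));
                System.out.println("exit " + exit + ":\n" + out.toString());
            }
        }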


  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and bayesian filter classifications. After a user marks an entry as read, it is added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the stream table. Every three minutes, a background process repopulates the Stream table with new entries (i.e. whenever the daemon adds new entries after checking the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster and give me some flexibility in how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream (entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
            on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. that do not exist in metadata) and that do not already exist in the stream. Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions it keeps adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?
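
    A sketch of the same statement with the NOT IN subqueries rewritten as left anti-joins, which MySQL of this era usually optimizes far better; it assumes indexes on metadata(user_id, entry_id), stream(user_id, entry_id) and subscriptions_users(user_id, subscription_id), and that a higher entries.id means a newer entry:

        insert into stream (entry_id, user_id)
        select e.id, su.user_id
        from entries e
        inner join subscriptions_users su
            on su.subscription_id = e.subscription_id
           and su.user_id = 1
        left join metadata m
            on m.entry_id = e.id and m.user_id = 1
        left join stream s
            on s.entry_id = e.id and s.user_id = 1
        where m.entry_id is null
          and s.entry_id is null
        order by e.id desc   -- "newest first" keeps the 100-row window stable
        limit 100;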


  • How do I use IIS6 style metabase paths in IIS7 AppCmd tool?

    - by Mike Atlas
    I'm currently in the process of upgrading old IIS6 automation scripts that use the IISVDir tool to create/modify/update apps and virtual directories, replacing them with AppCmd for IIS7. The IIS6 IISVDir commands reference metabase paths, e.g. "/W3SVC/1/ROOT/MyApp", where 1 is the ID of the "Default Web Site" site. The command doesn't actually require the display name of the site to make changes to it. This works well, since on a different-language OS the "Default Web Site" could be named, for example, "既定の Web サイト", or anything else for that matter. But this flexibility is lost if AppCmd can only reference "Default Web Site" via its name and not via a language-neutral identifier. So, how can I script AppCmd to refer to sites, vdirs and apps using language-neutral identifiers rather than the "Default Web Site" display name? Perhaps I need to start creating my own site from the start, name it something specific, and stop using "Default Web Site" as the root? (Disclosure: I only have an IIS7-English machine that I am working on currently, but I have both IIS6-English and IIS6-Japanese machines for testing my old scripts -- so perhaps it really is just "Default Web Site" still on Win2k8-Japanese?)
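
    For what it's worth, a sketch of ID-based addressing that appcmd does understand (the verbs and the /text: switch are real appcmd syntax; the paths and names here are examples): filter on the numeric site ID, capture the display name once, and script against that, so no localized name is ever hard-coded.

        rem List the site whose numeric ID is 1, whatever it is named:
        appcmd list site /id:1

        rem Capture its name once, then reuse it:
        for /f "tokens=*" %%s in ('appcmd list site /id:1 /text:name') do set SITENAME=%%s
        appcmd add app /site.name:"%SITENAME%" /path:/MyApp /physicalPath:C:\inetpub\MyApp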


  • paperclip overwrites / resets S3 permissions for non-bucket-owners

    - by adriandz
    I have opened this as an issue on Github (http://github.com/thoughtbot/paperclip/issues/issue/225), but on the chance that I'm just doing this wrong, I thought I'd also ask about it here. If someone can tell me where I'm going wrong, I can close the issue and save the Paperclip guys some trouble.

    Issue: when using S3 for storage, and you wish your bucket to allow access to other users to whom you have granted access, Paperclip appears to overwrite the permissions on the bucket, removing access for these users.

    Process for duplication:

    1. Create a bucket in S3 and set up a Rails app with Paperclip to use this bucket for storage.
    2. Add a user (for example, [email protected], the user for the video-encoding service Zencoder) to the bucket, and grant this user List and Read/Write permissions.
    3. Upload a file.
    4. Refresh the permissions: the user you added will be gone, and a user "Everyone" with read permissions will have been added.

    The end result is that, so far as I can tell, you cannot retain the desired permissions on your bucket when using Paperclip with S3. Can anyone help?
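
    For anyone reproducing this, a sketch of the relevant model configuration (has_attached_file, :storage and :s3_permissions are real Paperclip options of that era; the model, attachment name and paths are made up). :s3_permissions controls the ACL Paperclip writes per uploaded object, which is worth ruling out before concluding the bucket ACL itself is being rewritten:

        class Video < ActiveRecord::Base
          has_attached_file :source,
            :storage        => :s3,
            :s3_credentials => "#{Rails.root}/config/s3.yml",
            :bucket         => "my-bucket",        # hypothetical
            :s3_permissions => "public-read"       # per-object ACL
        end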


  • PHP parsing speed optimization

    - by Arnaud
    I would like to add tooltips or generate links according to the elements available in the database. For example, if the HTML page printed is:

        to reboot your linux host in single-user mode you can ...

    I will use explode(" ", $row[page]), and the idea is then to look up every single word of the page to find out whether it has a related reference. In this example, let's say I've got a table referance with one entry for reboot and one for linux:

        reboot: restart a computer
        linux: operating system

    Now my output will look like this (I've replaced < and > with @):

        to @a href="ref/reboot"@reboot@/a@ your @a href="ref/linux"@linux@/a@ host in single-user mode you can ...

    Instead of a static list generated when I save the content, if I add more keywords in the future the text will become more interactive. My main concern and question is: how can I create an efficient enough process to do this? Should I store all the db entries in an array and compare them? Do an SQL query for each word (seems to be crazy)? Dump the table to a file and use a very long regex, or a "grep -f pattern data" way of doing it? I'm sure there must be a better way of doing it, I just don't have a clue about it; or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!
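
    A sketch of a single-pass alternative (preg_replace_callback and preg_quote are the real PHP functions, and the anonymous functions need PHP 5.3+; the table referance comes from the question, while its column name is a guess): load the keywords once, build one alternation pattern, and replace whole words in a single scan instead of querying per word.

        <?php
        // Load all reference keywords once (PDO assumed; adapt to your DB layer).
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        $terms = $pdo->query('SELECT keyword FROM referance')->fetchAll(PDO::FETCH_COLUMN);

        // One pattern that matches any keyword as a whole word.
        $quoted  = array_map(function ($t) { return preg_quote($t, '/'); }, $terms);
        $pattern = '/\b(' . implode('|', $quoted) . ')\b/i';

        $html = preg_replace_callback($pattern, function ($m) {
            return '<a href="ref/' . strtolower($m[1]) . '">' . $m[1] . '</a>';
        }, $row['page']);
        ?>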


  • How to hold a payment in Paygate for a while?

    - by Fero
    Hi all, I have a query regarding holding a payment in the PAYGATE payment gateway. Here is the problem in brief. I am building a website where a payment should go through only once a certain quantity of a product has been bought. For example, if there is an iPhone on my site, that particular phone must be bought in a certain quantity, which is set by the admin. The quantity may be bought one by one by different users, or a single user can buy all of it at once. In this case, as a developer, how am I able to hold the payments received from users? I don't want to capture the payments until the required quantity has been bought, because if it never is, I would need to refund the money to each account, and we would rather avoid that process. That's why we are looking into holding the payment. Is this possible, or what is the best way to solve this problem? Please let me know your professional opinion. Thanks in advance...


  • CSS3 animations - how to apply different animations to different elements on scroll

    - by DeanDeey
    I have a question about doing CSS3 animations on scroll. I found code that shows a DIV element after the user scrolls down:

        <script type="text/javascript">
        $(document).ready(function() {
            /* Every time the window is scrolled ... */
            $(window).scroll(function() {
                /* Check the location of each desired element */
                $('.hideme').each(function(i) {
                    var bottom_of_object = $(this).position().top + $(this).outerHeight();
                    var bottom_of_window = $(window).scrollTop() + $(window).height();
                    /* If the object is completely visible in the window, fade it in */
                    if (bottom_of_window > bottom_of_object) {
                        $(this).animate({'opacity': '1'}, 500);
                    }
                });
            });
        });
        </script>

    I'm building a site which will have about 5 different boxes, each shown when the user scrolls down to it. My question is how to do the animation inside those boxes. For example, in each box there is a title, content and images. How do I make these elements appear one after another? With this code, all the elements of the class are shown at once, but I would like that when the user scrolls down, first the title appears, then the content, and finally the images. Then, when the user scrolls to the next box, the same process repeats itself. I tried some CSS3 delays, but in this case I don't know how long the user will take to scroll down. Thanks!
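
    A sketch of a staggered variant (.delay() and .animate() are real jQuery; the child selectors .title, .content, .images are hypothetical): keying each delay off the element's index among its siblings means the countdown starts only once the box becomes visible, so the user's scrolling speed no longer matters.

        $('.hideme').each(function() {
            var bottom_of_object = $(this).position().top + $(this).outerHeight();
            var bottom_of_window = $(window).scrollTop() + $(window).height();
            if (bottom_of_window > bottom_of_object) {
                // Title, then content, then images, 400 ms apart.
                // (A guard such as adding a 'shown' class would stop
                // repeated scroll events from re-queuing the animation.)
                $(this).find('.title, .content, .images').each(function(i) {
                    $(this).delay(i * 400).animate({'opacity': '1'}, 500);
                });
            }
        });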


  • Passing an Ajax variable to a Codeigniter function

    - by Matt
    Hello, I think this is a simple one. I have a CodeIgniter function which takes the inputs from a form and inserts them into a database. I want to Ajaxify the process. At the moment the first line of the function gets the id field from the form; I need to change this so it gets the id field from the Ajax POST (which references a hidden field in the form containing the necessary value) instead. How do I do this, please? My CodeIgniter controller function:

        function add() {
            $product = $this->products_model->get($this->input->post('id'));

            $insert = array(
                'id'    => $this->input->post('id'),
                'qty'   => 1,
                'price' => $product->price,
                'size'  => $product->size,
                'name'  => $product->name
            );

            $this->cart->insert($insert);
            redirect('home');
        }

    And the jQuery Ajax function:

        $("#form").submit(function() {
            var dataString = $("input#id")
            //alert (dataString); return false;
            $.ajax({
                type: "POST",
                url: "/home/add",
                data: dataString,
                success: function() {
                }
            });
            return false;
        });

    As always, many thanks in advance.
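
    A sketch of the jQuery side, which is likely the part that needs changing (it assumes the hidden field really is <input id="id">): $("input#id") is a jQuery object, not its value, so no parameter named id ever reaches the server. Posting a named key/value pair lets $this->input->post('id') in the controller work unchanged.

        $("#form").submit(function() {
            $.ajax({
                type: "POST",
                url: "/home/add",
                // Send {id: <value>} so CodeIgniter's input->post('id') sees it.
                data: { id: $("input#id").val() },
                success: function(response) {
                    // update the cart display here instead of redirecting
                }
            });
            return false;
        });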


  • Why is my PHP string being converted to 1?

    - by animuson
    Ok, so here's the quick rundown. I have a function to generate the page numbers. This:

        <?php die($ani->e->tmpl->pages("/archive", 1, 15, 1, true)); ?>

    will output "Single Page", as expected. But this:

        <?php $page_numbers = $ani->e->tmpl->pages("/archive", 1, 15, 1, true); ?>
        <?= $page_numbers ?>

    will output a simple 1 to the page. Why is it getting converted to a 1? I would expect it to store the 'Single Page' string in the $page_numbers variable and then output it (like an echo) exactly the same.

    EDIT: Running var_dump($page_numbers) returns int(1)... Here is the entire function in context:

        <?php
        // other functions...
        function show_list() {
            global $ani;
            $page_numbers = $ani->e->tmpl->pages("/archive", 1, 15, 1, true);
            ob_start();
        ?>
        <!-- content:start -->
        <?php
            $archive_result = $ani->e->db->build(array(
                "select"  => "*",
                "from"    => "animuson_archive",
                "orderby" => "0-time",
                "limit"   => 15
            ));
            while ($archive = $ani->e->db->fetch($archive_result)) {
        ?>
            <h2><a href="/archive/article/<?= $archive['aid'] ?>/<?= $archive['title_nice'] ?>"><?= $archive['title'] ?></a></h2>
            <!-- breaker -->
        <?php } ?>
        <?= var_dump($page_numbers) ?>
        <!-- content:stop -->
        <?php
            $ani->e->tmpl->process("box", ob_get_clean());
        }
        // other functions...
        ?>
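
    One way to reproduce the symptom, as a purely illustrative sketch (this is not the actual pages() source): a method that echoes its result and returns 1 shows "Single Page" under die() only because the echo happens during the call itself, while an assignment captures nothing but the int return value.

        <?php
        function pages_like() {
            echo "Single Page";  // printed as a side effect of the call
            return 1;            // what the caller actually receives
        }

        die(pages_like());            // screen shows "Single Page"

        $page_numbers = pages_like(); // $page_numbers holds only the return value
        var_dump($page_numbers);      // int(1)
        ?>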


  • Setting white balance and exposure mode for the iPhone camera + enum defaults

    - by Spectravideo328
    I am using the back camera of an iPhone 4 and going through the standard, lengthy process of creating an AVCaptureSession and adding an AVCaptureDevice to it. Before attaching the AVCaptureDeviceInput of that camera to the session, I am testing my understanding of white balance and exposure, so I am trying this:

        [self.theCaptureDevice lockForConfiguration:nil];
        [self.theCaptureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeLocked];
        [self.theCaptureDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
        [self.theCaptureDevice unlockForConfiguration];

    1. Given that the various options for white-balance mode are in an enum, I would have thought the default would always be zero, since the enum typedef variable was never assigned a value. If I breakpoint and po the values in the debugger, I find that the default white-balance mode is actually set to 2. Unfortunately, the AVCaptureDevice header files do not say what the defaults are for the various camera settings.

    2. This might sound silly, but can I assume that once I stop the app, all the settings for white balance and exposure mode go back to their defaults, so that if I start another app right after, the camera device is not somehow stuck with those "hardware settings"?
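
    For reference, a sketch of the constants as declared in AVCaptureDevice.h (values as found in SDK headers of this era; re-check against your own SDK version), which would make a reading of 2 the continuous-auto mode rather than an uninitialized value:

        typedef enum {
            AVCaptureWhiteBalanceModeLocked                     = 0,
            AVCaptureWhiteBalanceModeAutoWhiteBalance           = 1,
            AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance = 2,
        } AVCaptureWhiteBalanceMode;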


  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the logic and business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage, and I believe we have a pretty solid product. As we have worked through some of the more complex logic, however, I have found it increasingly difficult to unit test. We are utilizing the CompositionContainer to query for types required by these complex algorithms. My unit tests are sometimes difficult to follow due to the lengthy mock-object setup that must take place, just right, for certain circumstances to be verified. My unit tests often take me longer to write than the code that I'm trying to test. I realize this is not only an issue with dependency injection but with design as a whole. Is poor method design, or a lack of composition, to blame for my overly complex tests? I've tried base-classing tests, creating commonly used mock objects, and ensuring that I utilize the container as much as possible to ease this issue, but my tests always end up quite complex and hard to debug. What are some tips that you've seen to keep such tests concise, readable, and effective?
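
    One pattern that tends to shrink exactly this kind of setup, sketched below (CompositionContainer, TypeCatalog, ComposeExportedValue and GetExportedValue are the real MEF APIs; the fixture and algorithm types are hypothetical, and ComplexAlgorithm is assumed to carry the usual [Export]/[Import] attributes): a test builder that centralizes the container wiring and lets each test register only the fakes it cares about.

        internal class AlgorithmFixture
        {
            private readonly CompositionContainer _container =
                new CompositionContainer(new TypeCatalog(typeof(ComplexAlgorithm)));

            // Register one stub/mock dependency for this test.
            public AlgorithmFixture With<T>(T instance)
            {
                _container.ComposeExportedValue(instance);
                return this;
            }

            public ComplexAlgorithm Build()
            {
                return _container.GetExportedValue<ComplexAlgorithm>();
            }
        }

        // usage: var algo = new AlgorithmFixture().With<IRateSource>(stubRates).Build();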


  • Django database design -- is this a good strategy for overriding defaults?

    - by rh0dium
    Hi SO's, I have a question on good database design practices and I would like to leverage you guys for pointers. The project started out simple: "Hey, we have a bunch of questions we want answered for every project." (No problem.) Which turned into: "Hey, we have so many questions, can we group them into sections?" (Yup, we can do that.) Which led to: "Can we weight these questions? And I don't really want some of these questions for my project." (Yes, but we are getting difficult.) And then I'm thinking they will want each section to have its own weight too.

    Requirements

    So there are the requirements:

    - For n number of projects,
    - allow an admin member to select the questions for a project,
    - allow the admin member to re-weight or use the default weights for the questions,
    - allow the admin member to re-weight the sections,
    - allow team members to answer the questions.

    So here is what I came up with. Please feel free to comment and provide better examples.

    models.py:

        from django.db import models
        from django.contrib.sites.models import Site
        from django.conf import settings

        class Section(models.Model):
            """ This describes the various sections for a checklist. """
            name = models.CharField(max_length=64)
            description = models.TextField()

        class Question(models.Model):
            """ This simply provides a simple way to list out the questions. """
            question = models.CharField(max_length=255)
            answer_type = models.CharField(max_length=16)
            description = models.TextField()
            section = models.ForeignKey(Section)

        class ProjectQuestion(models.Model):
            """ These are the questions relevant to the project. """
            question = models.ForeignKey(Question)
            answer = models.CharField(max_length=255)
            required = models.BooleanField(default=True)
            weight = models.FloatField(default = XXX)

        class Project(models.Model):
            """ Here is where we want to gather our questions. """
            questions = models.ManyToManyField(ProjectQuestion)

    Immediate questions:

    - When I start a project, any ideas on how to "pre-populate" the questions (and ultimately the weights) for the project?
    - Is there a generally accepted method for this process that I am missing? Basically, the idea is that the ProjectQuestion overrides the question's default weight and stores the answer.
    - It appears that a good chunk of the work will be done in the views, and a lot of checking will need to occur there. Is that OK?

    Again, feel free to give me better strategies!! Thanks
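
    A sketch of one common pre-population pattern, under the assumption that Question gains a default_weight field to copy from (that field and the helper function are hypothetical, not part of the models above):

        def populate_project(project):
            """Create one ProjectQuestion per Question, seeded with defaults."""
            for question in Question.objects.all():
                pq = ProjectQuestion.objects.create(
                    question=question,
                    answer="",
                    required=True,
                    weight=question.default_weight,  # hypothetical field
                )
                project.questions.add(pq)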


  • How to push a new feature to a central Mercurial repo?

    - by Sly
    I'm assigned the development of a feature for a project. I'm going to work on that feature for several days over a period of a few weeks. I'll clone the central repo, then work locally for three weeks, committing my progress to my own repo several times along the way. When I'm done, I'm going to pull/merge/commit before I push. What is the right way to push my feature as a single changeset to the central repo? I don't want to push 14 "work in progress" changesets and 1 "merged" changeset; I want other collaborators on the project to see only one changeset with a meaningful commit message (such as "Implemented feature ABC"). I'm new to Mercurial and DVCS, so don't hesitate to provide guidance if you think I'm not approaching this the right way.

    <My own answer> So far I have found a way to reduce 15 changesets to 2. Suppose changesets 10 to 24 are "work in progress" changesets. I can 'hg collapse -r 10:24 -m "Implemented feature ABC"' (14 changesets collapsed into 1). Then I must 'hg pull' + 'hg merge' + 'hg commit -m "Merged with most recent changes"'. But now I'm stuck with 2 changesets: I can no longer 'hg collapse', because the pull/merge/commit broke my changeset sequence. Of course 2 changesets is better than 15, but I'd still rather have 1. </My own answer>
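
    For comparison, a sketch of the same flow using the rebase extension instead of collapse (hg rebase and its --collapse flag are real once the extension is enabled in .hgrc; the revision numbers follow the example above). Rebasing moves the work onto the pulled tip and folds it in a single step, so no leftover merge changeset is created:

        hg pull
        # Move changesets 10:24 onto the new tip and fold them into one:
        hg rebase --source 10 --dest tip --collapse
        hg push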

