Search Results

Search found 23021 results on 921 pages for 'process monitoring'.


  • Vibrations when exploding/repacking movie

    - by Stefano Borini
    Please bear with me, I know that what I'm doing can sound strange, but I can guarantee there's a very good reason for it. I took a movie with my camera, as AVI. I imported the movie into iMovie and then exploded the single frames as PNG. Then I repacked these frames into a mov using the following code:

        movie, error = QTMovie.alloc().initToWritableFile_error_(out_path, None)
        mt = QTMakeTime(v, scale)
        attrib = {QTAddImageCodecType: "jpeg"}
        for path in png_paths:
            image = NSImage.alloc().initWithContentsOfFile_(path)
            movie.addImage_forDuration_withAttributes_(image, mt, attrib)
        movie.updateMovieFile()

    The resulting mov works, but the frames look "nervous" and shaky compared to the original AVI, which appears smoother. The size of the two files is approximately the same, and both the export and the repacking happened at 30 fps. The pics also appear to be aligned, so it's not due to an accidental shift of the frames. My question is: knowing the file formats and the process I performed, what is the probable cause of this result, and how can I fix it?

  • Does MATLAB perform tail call optimization?

    - by Shea Levy
    I've recently learned Haskell, and am trying to carry the pure functional style over to my other code when possible. An important aspect of this is treating all variables as immutable, i.e. constants. In order to do so, many computations that would be implemented using loops in an imperative style have to be performed using recursion, which typically incurs a memory penalty due to the allocation of a new stack frame for each function call. In the special case of a tail call (where the value returned by a called function is immediately returned again by its caller), however, this penalty can be bypassed by a process called tail call optimization (in one method, this can be done by essentially replacing a call with a jmp after setting up the stack properly). Does MATLAB perform TCO by default, or is there a way to tell it to?
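
    For illustration, here is a minimal MATLAB sketch of the kind of function the question is about: the recursive call is the last operation performed, so a runtime that performs TCO could reuse the current stack frame instead of allocating a new one (the function name and accumulator argument are just illustrative).

        % Tail-recursive sum: the recursive call is the final operation,
        % with the partial result threaded through an accumulator.
        function acc = sumTail(v, acc)
            if nargin < 2
                acc = 0;            % first call: start the accumulator at zero
            end
            if isempty(v)
                return              % base case: accumulator holds the answer
            end
            acc = sumTail(v(2:end), acc + v(1));   % tail call
        end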

  • Architectural decision: Qt or Eclipse Platform?

    - by umanga
    We are in the process of designing a tool to be used with an HDEM (High Definition Electron Microscope). We get stacks of 2D images from the HDEM, and the first step is detecting borders on the sections. After detecting the edges of the 2D slices, the next step is constructing the 3D model from them. The border-detecting algorithm(s) were implemented by one of our professors; he used C and suggests we stick with it (to gain high performance, and the code will probably be parallelised in future). We have to develop a comprehensive UI, a 3D viewer, a 2D editor, etc. and use this algorithm. The application should support the usual features like project save/open, undo, redo, etc. Our technology options are: A) build the entire platform from scratch using Qt, or B) use the Eclipse Platform. Our concern is that if we choose A) we can easily integrate the border-detecting algorithm(s), because the development environment is C/C++, but we have to implement the basic features from scratch. If we choose B) we get the basic features from the Eclipse Platform, but integrating the C libraries is going to be a tedious task. Any suggestions on this?

  • Many-To-Many dimensional model

    - by Mevdiven
    Folks, I have a dimension table called DIM_FILE which holds information on the files we receive from customers. Each file has detail records, which constitute my FACT table, CUST_DETAIL. In the main process, a file goes through several stages, and each stage tags a status onto it. Long story short, I have a many-to-many relationship: a customer record belongs to only a single file, but a file can have multiple statuses. Any ideas around star schema dimensional modeling?

        FACT
        ----
        CustID
        FileID
        AmountDue

        DIM_FILE
        --------
        FileID
        FileName
        DateReceived

        FILE_STATUS
        -----------
        FileID
        StatusDateTime
        StatusCode
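
    A minimal SQL sketch of the classic resolution for this kind of multi-valued relationship, assuming the Kimball-style bridge pattern (table and column names are illustrative, not from the question): the file row carries a single group key, and a bridge table maps each group to its member statuses.

        -- Bridge table: one row per (group, status) combination.
        CREATE TABLE STATUS_GROUP_BRIDGE (
            StatusGroupKey INT          NOT NULL,
            StatusCode     VARCHAR(10)  NOT NULL,
            PRIMARY KEY (StatusGroupKey, StatusCode)
        );

        -- DIM_FILE then points at one group instead of many status rows.
        ALTER TABLE DIM_FILE ADD StatusGroupKey INT;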

  • In search of a packaged .NET security solution for Web Forms

    - by Chuck Conway
    We are looking for a security solution for ASP.NET that has security down to the control level. This is not a necessity, but it would be nice. At the very least it needs to be extensible enough to allow for control-level permissions. The solution should have an administration panel of some sort. It also needs to support roles, groups, and individual permissions. We haven't seen anything like this in the marketplace, so we are in the process of rolling our own solution, but we'd rather use an off-the-shelf product.

  • Super user powers in development environment?

    - by red tiger
    Is it too much to ask that the IT department give my development team an environment where we can use whatever software we can download, without security having to vet each tool first? Of course, the software could still be checked by security before deploying to Test, and the development environment could be on a VLAN that is not accessible from outside. This would greatly aid us by allowing us to use whatever open-source testing tools we want. I'm asking because we have such tight restrictions on the software approval process, and I hear of other teams that have an environment where they can configure their local server however they want and use whatever tools they want. What's the norm out there? Thank you for any comments!

  • Missing artifact error in Maven

    - by abhin4v
    I get a missing artifact error during a Maven build because one of the dependencies declares its parent artifact using a property for the version. The property itself is declared in the parent POM, and my project's build fails with this error:

        [ERROR] Failed to execute goal on project abc: Unable to get dependency
        information for xyz:pqr:jar:SNAPSHOT: Failed to process POM for
        xyz:pqr:jar:SNAPSHOT: Non-resolvable parent POM xyz:pqr-parent:${someversion}
        for xyz:pqr:${someversion}: Failed to resolve POM for
        xyz:pqr-parent:${someversion} due to Missing:
        ----------
        1) xyz:pqr-parent:pom:${someversion}
        ----------
        1 required artifact is missing.

        for artifact: xyz:pqr-parent:pom:${someversion}

    I have verified that the artifacts are present in the correct location in the repository. Is there a way to specify the value of the ${someversion} property used in the dependency's POM? If not, how should the dependency's POM be changed to resolve the error?
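
    For reference, a hedged sketch of the shape the dependency's POM would need (the coordinates and the literal version are illustrative): Maven resolves the parent before any properties from that parent are available, so the parent's version generally has to be a literal value in the child POM.

        <!-- In the dependency's pom.xml -->
        <parent>
            <groupId>xyz</groupId>
            <artifactId>pqr-parent</artifactId>
            <!-- a literal version instead of ${someversion} -->
            <version>1.0-SNAPSHOT</version>
        </parent>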

  • Taming the malloc/free beast -- tips & tricks

    - by roufamatic
    I've been using C on some projects for a master's degree but have never built production software with it. (.NET and JavaScript are my bread and butter.) Obviously, the need to free() memory that you malloc() is critical in C. This is all fine, well, and good if you can do both in one routine. But as programs grow and structs deepen, keeping track of what's been malloc'd where, and what's appropriate to free, gets harder and harder. I've looked around on the interwebs and only found a few generic recommendations for this. What I suspect is that some of you long-time C coders have come up with your own patterns and practices to simplify this process and keep the evil in front of you. So: how do you recommend structuring your C programs to keep dynamic allocations from becoming memory leaks?
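
    One widely used pattern (a sketch, not the only answer): pair every allocation with a matching create/destroy function, so ownership and cleanup live in exactly one place. The struct and names below are illustrative.

        #include <stdlib.h>

        typedef struct {
            double *samples;
            size_t  count;
        } Buffer;

        /* The only place a Buffer is allocated. */
        Buffer *buffer_create(size_t count) {
            Buffer *b = malloc(sizeof *b);
            if (!b)
                return NULL;
            b->samples = malloc(count * sizeof *b->samples);
            if (!b->samples) {      /* clean up the partial allocation */
                free(b);
                return NULL;
            }
            b->count = count;
            return b;
        }

        /* The only place a Buffer is freed; safe to call with NULL. */
        void buffer_destroy(Buffer *b) {
            if (!b)
                return;
            free(b->samples);
            free(b);
        }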

  • Different return XML in a WCF Operation

    - by Sean Hederman
    I am writing a service against an international HTTP standard, and there is one method that can return three different XML results; call them Single, Multiple and Error. I've written an IXmlSerializable class that can consume and generate each of these results. However, WCF seems to insist that I can only have a single XML root name for the return value: I have to choose one XmlRoot for my custom object, either Single, Multiple or Error. How can I set up WCF so that I can choose at runtime what the root will be? This is what I have currently:

        /// <summary>
        /// A collection of items.
        /// </summary>
        [XmlRoot("Multiple", Namespace = "DAV:")]
        public sealed class ItemCollection : IEnumerable<Item>, IXmlSerializable

        /// <summary>
        /// Processes and returns the items.
        /// </summary>
        [WebInvoke(Method = "POST", UriTemplate = "{*path}", BodyStyle = WebMessageBodyStyle.Bare)]
        [OperationContract]
        [XmlSerializerFormat]
        ItemCollection Process(string path);

  • Another Java vs. Scala perspective - is this typical?

    - by Alex R
    I have been reading about Scala for a while and have even written some small programs to better understand some of the more esoteric features. Today I decided to do my first "real project", translating some 60 lines of ugly Java code into Scala to rewrite it using the better pattern-matching features (why? because the Java version was becoming hard to maintain due to an excessive combination of regexes and conditionals). About halfway through the editing process, Eclipse threw up an error. I get the general impression that the Scala IDE in Eclipse is a lot buggier and less complete than its Java equivalent. Is this correct, or do I just have a bad installation? Is there a better IDE for Scala?

  • Get group key from bridge table

    - by Mads Jensen
    I'm developing an ETL process and need a bridge table for a one-to-many relationship between a fact table and a dimension table (MySQL database). There is a limited number of combinations (some thousands), so I want to re-use group keys from the bridge table to limit its size. Any group of dimensions belonging to a fact row consists of a number of dimension keys (1 to around 15) assigned to a unique group key, as below:

        group_key | dimension_key
        ----------+--------------
                1 | 1
                1 | 3
                1 | 4
                2 | 1
                2 | 2
                2 | 3
                3 | 1
                3 | 4

    How do I go about retrieving the unique group key for the dimensions 1, 3, 4 (i.e. 1)?
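
    A hedged sketch of one standard way to phrase this exact-match lookup in MySQL (the table name is illustrative, and it assumes each (group_key, dimension_key) pair appears once): keep only groups whose row count equals the size of the wanted set and whose every member is in that set.

        SELECT group_key
        FROM bridge                                   -- illustrative name
        GROUP BY group_key
        HAVING COUNT(*) = 3                           -- group has exactly 3 members
           AND SUM(dimension_key IN (1, 3, 4)) = 3;   -- and all 3 are the wanted keys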

  • Django: How do I position a page when using Django templates

    - by swisstony
    I have a web page where the user enters some data and then clicks a submit button. I process the data and then use the same Django template to display the original data, the submit button, and the results. When I am using the Django template to display results, I would like the page to be automatically scrolled down to the part of the page where the results begin. This allows the user to scroll back up the page if she wants to change her original data and click submit again. Hopefully, there's some simple way of doing this that I can't see at the moment.
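
    One simple approach, sketched below with illustrative names (and assuming the request context processor is enabled so {{ request.path }} resolves): point the form's action at a URL fragment and give the results block a matching id, so the browser jumps there when the response loads.

        <form method="post" action="{{ request.path }}#results">
            {{ form.as_p }}
            <input type="submit" value="Submit">
        </form>

        <div id="results">
            {% if results %}
                {{ results }}
            {% endif %}
        </div>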

  • How can I Export a Table in Access using VBA into a specific sheet in an Excel spreadsheet?

    - by Bryan
    I have some tables, call them Table1, Table2, ..., and I need them exported into specific worksheets of a macro-enabled Excel file (.xlsm) that already exists. So I would need to put Table1 into Sheet2, Table2 into Sheet3, and so on. I had been doing this manually via the export menu in Access, but it is getting monotonous, so I would like to automate the process. Each worksheet in the Excel file already contains code, and that code needs to remain intact.
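
    A hedged VBA sketch of one way to do this by automating Excel from Access (the path, table and sheet names are illustrative): writing the data with CopyFromRecordset fills cells without touching the workbook's existing VBA code.

        Sub ExportTablesToSheets()
            Dim xl As Object, wb As Object
            Dim rs As DAO.Recordset

            Set xl = CreateObject("Excel.Application")
            Set wb = xl.Workbooks.Open("C:\Data\Existing.xlsm")

            ' Dump Table1 into Sheet2 starting at A2 (repeat per table/sheet).
            Set rs = CurrentDb.OpenRecordset("Table1")
            wb.Worksheets("Sheet2").Range("A2").CopyFromRecordset rs
            rs.Close

            wb.Save
            wb.Close
            xl.Quit
        End Sub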

  • Detect sender of signal (Linux, ptrace)

    - by osgx
    Hello. Can I distinguish between a signal delivered directly to a process and one re-delivered via a debugger?

    Case 1 (not ptraced):

        $ ./process1
        # process1 sets up a handler and calls alarm(5); ...
        # the signal is handled, and I can inspect the handler's parameters

    Case 2 (ptraced by debugger1):

        $ debugger1 ./process1
        # process1 sets up a handler and calls alarm(5); ...
        # the signal is caught by debugger1, which resumes process1 with
        # PTRACE_CONT, passing the signal number as its 4th parameter;
        # the signal is re-delivered to process1 and handled there

    So, how can I detect in the signal handler whether the signal was re-delivered by the debugger or sent by the system? The OS is Linux, kernel 2.6.30. The programs are written in plain C.
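
    For inspecting where a signal came from, a minimal C sketch of an SA_SIGINFO handler (illustrative only; printf is not async-signal-safe and is used here purely for the demonstration): the siginfo_t fields such as si_code and si_pid describe how the signal was generated, which is the kind of information the question is after.

        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        static void handler(int sig, siginfo_t *info, void *ctx)
        {
            (void)ctx;
            /* si_code tells how the signal was raised (e.g. SI_KERNEL, SI_USER);
               si_pid names the sending process for user-sent signals. */
            printf("sig=%d si_code=%d si_pid=%d\n",
                   sig, info->si_code, (int)info->si_pid);
        }

        int main(void)
        {
            struct sigaction sa = {0};
            sa.sa_sigaction = handler;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGALRM, &sa, NULL);

            alarm(2);       /* same trigger as in the question */
            pause();
            return 0;
        }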

  • Drupal: Multiple Stylesheets

    - by Vecta
    I'm currently creating a custom Drupal theme for my company and I'm having trouble getting multiple stylesheets to load. I followed the instructions on this page by adding stylesheets to the .info file in the format:

        stylesheets[all][] = style.css
        stylesheets[all][] = name2.css
        ...

    However, when I load the page nothing changes, and when I view the source it consistently lists style.css but seems to ignore the others. Am I misunderstanding the process of adding additional stylesheets? What could I be doing incorrectly? Thanks for any help!
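
    Two hedged notes, assuming Drupal 6/7 and a theme named mytheme (both names illustrative): changes to a theme's .info file typically only take effect after the theme registry cache is cleared, and a stylesheet can also be attached from template.php as a cross-check:

        <?php
        // In the theme's template.php: attach the extra stylesheet at
        // preprocess time instead of (or in addition to) the .info entry.
        function mytheme_preprocess_page(&$variables) {
          drupal_add_css(drupal_get_path('theme', 'mytheme') . '/name2.css');
        }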

  • Remove first element from $@ in bash

    - by Herms
    I'm writing a bash script that needs to loop over the arguments passed into the script. However, the first argument shouldn't be looped over, and instead needs to be checked before the loop. If I didn't have to remove that first element I could just do:

        for item in "$@"; do
            # process item
        done

    I could modify the loop to check if it's in its first iteration and change the behavior, but that seems way too hackish. There's got to be a simple way to extract the first argument out and then loop over the rest, but I wasn't able to find it.
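
    A hedged bash sketch of the usual idiom: save the first argument, then shift it out of "$@" so the loop only sees the rest.

        first="$1"    # inspect/check the first argument before the loop
        shift         # "$@" now starts at what used to be $2

        for item in "$@"; do
            # process item
            printf '%s\n' "$item"
        done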

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products that would track all these things for you. But at that moment, we had no resort but to write our own PowerShell scripts to do it.

    Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"

    Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also during an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for.

    Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here.

    Back to the "how often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a setup together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
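
    A minimal T-SQL sketch of the consistency-check split described above (the database name is illustrative): the full DBCC CHECKDB runs on the box where the backups are restored, while production gets the lighter physical-only pass.

        -- On the restore/offload server: the full consistency check.
        DBCC CHECKDB (N'MyDatabase');

        -- On the production server: physical structures only.
        DBCC CHECKDB (N'MyDatabase') WITH PHYSICALONLY;

        -- SQL Server 2008 R2 SP1 CU4 and above: trace flags that speed up
        -- the PHYSICALONLY pass, as described in the post.
        DBCC TRACEON (2562, -1);
        DBCC TRACEON (2549, -1);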

  • Python subprocess.Popen

    - by Albert
    I have this code:

        #!/usr/bin/python -u
        localport = 9876

        import sys, re, os
        from subprocess import *

        tun = Popen(["./newtunnel", "22", str(localport)],
                    stdout=PIPE, stderr=STDOUT)
        print "** Started tunnel, waiting to be ready ..."
        for l in tun.stdout:
            sys.stdout.write(l)
            if re.search("Waiting for connection", l):
                print "** Ready for SSH !"
                break

    The "./newtunnel" process will not exit; it constantly writes more and more data to stdout. However, this code produces no output and just keeps waiting on tun.stdout. When I kill the newtunnel process externally, all the data gets flushed to tun.stdout. So it seems that I can't get any data from tun.stdout while the child is still running. Why is that? How can I get the information? Note that the default bufsize for Popen is 0 (unbuffered). I can also specify bufsize=0 explicitly, but that doesn't change anything.
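
    One commonly suggested variant (a sketch, untested against this exact setup): the file-object iterator used by "for l in tun.stdout" reads ahead into an internal buffer, so complete lines can sit there until the buffer fills; calling readline() directly avoids that read-ahead. (If the child itself block-buffers its stdout when not attached to a terminal, the fix has to happen on the child's side instead.)

        while True:
            l = tun.stdout.readline()   # returns as soon as a full line arrives
            if not l:
                break                   # EOF: the child closed its stdout
            sys.stdout.write(l)
            if re.search("Waiting for connection", l):
                print "** Ready for SSH !"
                break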

  • Issue with maxWorkerThreads and thread count

    - by Kartik M
    I have created an ASP.NET application which creates threads in an infinite loop. I set maxWorkerThreads to 20 in processModel in machine.config. When I checked the thread count in perfmon, there were around 7000 threads created in the worker process. In Page_Load() I have:

        using System.Threading;
        ...
        int count = 0;
        var threadList = new System.Collections.Generic.List<System.Threading.Thread>();
        try
        {
            while (true)
            {
                Thread newThread = new Thread(new ThreadStart(DummyCall), 1024);
                newThread.Start();
                threadList.Add(newThread);
                count++;
            }
        }
        catch (Exception ex)
        {
            Response.Write(count + " : " + ex.ToString());
        }

    And the function:

        void DummyCall()
        {
            System.Threading.Thread.Sleep(1000000000);
        }

    How do I restrict thread creation in ASP.NET with IIS 6/7?

  • How to call an external webservice in ASP.NET

    - by prince23
    Hi, I have a webservice, currently written in a Java application, and I need to use that webservice in my application and send a parameter to it. For example, let's say this is the path of the webservice: http://localhost:1838/Ajax/WebService.as

    1. Once I click the button in my page (emp.aspx), I need to call the webservice at the above path.
    2. What is the use of creating a proxy for a webservice using the wsdl tool, and can anyone tell me the syntax for it?

    So what is the process I need to follow to consume a webservice written in another application or domain? Looking forward to a solution. Thank you.
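
    On the second question, a hedged sketch of the wsdl.exe usage (the URL and file name here are illustrative, not the one from the question): the tool reads the service's WSDL and generates a C# proxy class that you compile into your project and call like any local class.

        rem Generate a C# proxy class from the service's WSDL description.
        wsdl.exe /language:CS /out:WebServiceProxy.cs http://example.com/Service.asmx?WSDL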

  • Linux Kernel - Slab Allocator Question

    - by Drex
    I am playing around with the kernel and am looking at the kmem_cache files_cachep belonging to fork.c, which is sized from sizeof(files_struct). My question is this: I have altered files_struct and added an rb_root (red/black tree root) using the built-in functionality in linux/rbtree.h. I can properly insert values into this tree. However, at some point a segfault occurs, and GDB backtraces the following:

        (gdb) backtrace
        #0  0x08066ad7 in page_ok (page=) at arch/um/os-Linux/sys-i386/task_size.c:31
        #1  0x08066bdf in os_get_top_address () at arch/um/os-Linux/sys-i386/task_size.c:100
        #2  0x0804a216 in linux_main (argc=1, argv=0xbfb05f14) at arch/um/kernel/um_arch.c:277
        #3  0x0804acdc in main (argc=1, argv=0xbfb05f14, envp=0xbfb05f1c) at arch/um/os-Linux/main.c:150

    I have spent many hours trying to figure out why there is a segfault, given that the red/black tree inserts properly. I'm thinking it's a memory allocation issue with new processes made by fork() of a parent process. Could this be the case, and could it have something to do with the kmem_cache files_cachep?
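
    For comparison, here is the canonical insert pattern from the kernel's rbtree documentation (Documentation/rbtree.txt), with illustrative names. One detail worth checking in a scenario like this: an rb_root embedded in a struct that gets duplicated on fork() must be reset to RB_ROOT in the copy, since the copied root would otherwise point into the parent's tree.

        #include <linux/rbtree.h>

        struct my_node {
            struct rb_node node;
            int key;
        };

        static int my_insert(struct rb_root *root, struct my_node *data)
        {
            struct rb_node **new = &root->rb_node, *parent = NULL;

            /* Walk down to the insertion point, remembering the parent. */
            while (*new) {
                struct my_node *this = rb_entry(*new, struct my_node, node);

                parent = *new;
                if (data->key < this->key)
                    new = &(*new)->rb_left;
                else if (data->key > this->key)
                    new = &(*new)->rb_right;
                else
                    return -1;      /* key already present */
            }

            /* Link in the new node and rebalance. */
            rb_link_node(&data->node, parent, new);
            rb_insert_color(&data->node, root);
            return 0;
        }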

  • Ruby Metaprogramming

    - by VP
    I'm trying to write a DSL that allows me to do:

        Policy.name do
          author "Foo"
          reviewed_by "Bar"
        end

    The following code can almost process it:

        class Policy
          include Singleton

          def self.method_missing(name, &block)
            puts name
            puts "#{yield}"
          end

          def self.author(name)
            puts name
          end

          def self.reviewed_by(name)
            puts name
          end
        end

    With the methods defined as class methods (self.method_name), I can access them using the following syntax:

        Policy.name do
          Policy.author "Foo"
          Policy.reviewed_by "Bar"
        end

    If I remove the self from the method names and try my desired syntax, I receive a "method not found" error at main: lookup climbs all the way up to the Kernel module without finding my function. That's OK, I understand the error. But how can I fix my class to make it work with my desired syntax?
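
    One common fix for this kind of DSL, sketched below with an illustrative DSL word (note that name is already defined on Class, so method_missing would never fire for Policy.name itself): evaluate the block with instance_eval against an object that defines the DSL words, so the bare calls inside the block resolve without an explicit receiver.

        class PolicyContext
          def author(name)
            puts name
          end

          def reviewed_by(name)
            puts name
          end
        end

        class Policy
          def self.method_missing(name, &block)
            puts name
            # Run the block as if it were written inside a PolicyContext instance.
            PolicyContext.new.instance_eval(&block) if block
          end
        end

        Policy.security do    # 'security' is an illustrative policy name
          author "Foo"
          reviewed_by "Bar"
        end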

  • C#/.NET: Separation of multipage tiff with compression "CCITT T.6" very slow

    - by Alex B
    I need to separate multiframe TIFF files, and I use the following method:

        public static Image[] GetFrames(Image sourceImage)
        {
            Guid objGuid = sourceImage.FrameDimensionsList[0];
            FrameDimension objDimension = new FrameDimension(objGuid);
            int frameCount = sourceImage.GetFrameCount(objDimension);
            Image[] images = new Image[frameCount];
            for (int i = 0; i < frameCount; i++)
            {
                MemoryStream ms = new MemoryStream();
                sourceImage.SelectActiveFrame(objDimension, i);
                sourceImage.Save(ms, ImageFormat.Tiff);
                images[i] = Image.FromStream(ms);
            }
            return images;
        }

    It works fine, but if the source was encoded using CCITT T.6 compression, separating a 20-frame file takes up to 15 seconds on my 2.5 GHz CPU. After saving the images to a single file using standard compression (LZW), the separation time is under 1 second. Is there a way to speed up the process?

  • How do you go from an abstract project description to actual code?

    - by Jason
    Maybe it's because I've been coding for around two semesters now, but the major stumbling block that I'm having at this point is converting the professor's project description and requirements into actual code. Since I'm currently in Algorithms 101, I basically do a bottom-up process: starting with a blank whiteboard, I draw out the object and method interactions, then translate that into classes and code. But now the prof has tossed interfaces and abstract classes into the mix. Intellectually, I can recognize how they work, but I'm stubbing my toes figuring out how to use these new tools with the current project (simulating a web server). In my professor's own words, mapping the abstract description to Java code is the real trick. So what steps are best used to go from English (or whatever your language is) to computer code? How do you decide where and when to create an interface, or use an abstract class?

  • How do I wait until a console application is idle?

    - by Anthony Mastrean
    I have a console application that starts up, hosts a bunch of services (a long-running startup), and then waits for clients to call into it. I have integration tests that start this console application and make "client" calls. How do I wait for the console application to complete its startup before making the client calls? I want to avoid Thread.Sleep(int) because that's dependent on the startup time (which may change), and I waste time if the startup is faster. Process.WaitForInputIdle works only on applications with a UI (and I confirmed that it does throw an exception in this case). I'm open to awkward solutions, like having the console application write a temp file when it's ready.
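
    In the same spirit as the temp-file idea, a hedged C# sketch using a named EventWaitHandle (the event name and timeout are illustrative): the console app signals the event once its services are hosted, and the test blocks on it with a timeout.

        using System;
        using System.Threading;

        static class StartupSignal
        {
            // Hypothetical cross-process event name shared by both sides.
            const string Name = "MyConsoleApp.Ready";

            // Called by the console application after its services are up.
            public static void Announce()
            {
                var ready = new EventWaitHandle(false, EventResetMode.ManualReset, Name);
                ready.Set();
            }

            // Called by the integration test right after Process.Start.
            public static void WaitFor(TimeSpan timeout)
            {
                var ready = new EventWaitHandle(false, EventResetMode.ManualReset, Name);
                if (!ready.WaitOne(timeout))
                    throw new TimeoutException("Console app did not finish starting up.");
            }
        }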
