Search Results

Search found 22041 results on 882 pages for 'kill process'.


  • Real-time aggregation of files from multiple machines to one

    - by dmitry-kay
    I need a tool which takes a list of machine names and file wildcards. It then connects to all these machines over SSH and begins to monitor changes (appends to the end) in each file matched by the wildcards. New lines in each such file are saved on the local machine to a file with the same name. (This is a real-time log collection task.) I could use ssh + tail -f, of course, but it is not very robust: if a monitoring process dies and then restarts, some data from the remote files may be lost, because tail -f does not save the position at which it last finished. I could write this tool myself, but first I'd like to know whether such a tool already exists.
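
    Whatever tool ends up doing the collecting, the robustness requirement comes down to checkpointing the read offset so a restart resumes exactly where the last run stopped. A minimal local sketch of that idea (C# purely for illustration; file names hypothetical) that reads only the bytes appended since the last saved position:

        using System;
        using System.IO;
        using System.Text;

        class ResumableTail
        {
            static void Main(string[] args)
            {
                string logFile = args[0];
                string offsetFile = logFile + ".offset";

                // Resume from the last checkpoint, or from the start on first run.
                long offset = File.Exists(offsetFile)
                    ? long.Parse(File.ReadAllText(offsetFile))
                    : 0;

                using (var fs = new FileStream(logFile, FileMode.Open,
                                               FileAccess.Read, FileShare.ReadWrite))
                {
                    fs.Seek(offset, SeekOrigin.Begin);
                    var buf = new byte[4096];
                    int n;
                    while ((n = fs.Read(buf, 0, buf.Length)) > 0)
                    {
                        Console.Write(Encoding.UTF8.GetString(buf, 0, n)); // forward new data
                        File.WriteAllText(offsetFile, fs.Position.ToString()); // checkpoint
                    }
                }
            }
        }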

    Read the article

  • jQuery fadeIn IE Png Issue when loading from external

    - by Adam Stone
    I am loading data from external HTML files within my domain into a div on my webpage, using a load-content method in jQuery. I take the div out of the new page while hiding the div in the current page, fading the old one out and the new one in. There is a PNG image in both of these divs and it is creating horrid black blobs in IE. It works fine in other browsers, but due to IE's inability to process multiple filters it makes a mess. I tried using the Unit PNG Fix to no avail. Does anyone have any fixes or ideas to help keep my PNGs looking nice during this transition? i46.tinypic.com/t9dtvr.jpg is a screenshot of the problem. I also discovered that the PNG that is on the page originally (before loading anything new) fades in and out perfectly using the Unit PNG Fix, but content loaded in and back out from external files doesn't. I've added the fix to those pages too, but that doesn't work either.

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses, er, reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing. This time I want to show you how code can evolve over the years with different .NET Framework versions.

    Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived which brought many new things like generics and new data structures. The "old-fashioned" Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as the key. Often it is convenient to ignore casing to make it easy to look up values which the user entered. An often followed route is to convert the string to upper case before putting it into the Hashtable.

        Hashtable Table = new Hashtable();

        void Add(string key, string value)
        {
            Table.Add(key.ToUpper(), value);
        }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case insensitively. Second, we can switch over to the (by now also old) Dictionary type to become a little faster, and we can keep the original keys (not upper-cased) in the dictionary.

        Dictionary<string, string> DictTable =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        void AddDict(string key, string value)
        {
            DictTable.Add(key, value);
        }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly.

    Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve depending on the problem you are trying to solve.

    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

        int Counter;
        const int Increments = 10 * 1000 * 1000;

        void IJustLearnedToUseThreads()
        {
            var t1 = new Thread(ThreadWorkMethod);
            t1.Start();
            var t2 = new Thread(ThreadWorkMethod);
            t2.Start();
            t1.Join();
            t2.Join();

            if (Counter != 2 * Increments)
                throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
        }

        void ThreadWorkMethod()
        {
            for (int i = 0; i < Increments; i++)
            {
                Counter++;
            }
        }

    It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never succeeds. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

        Counter = Counter + 1

    This involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only

        inc dword ptr [ecx+10h]

    which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into its cache, modifies it, and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so one CPU can make changes to a value which has changed in the meantime in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we overwrote the already changed value from the other thread. This is a very common case, and people learn to protect their data with proper locking.

        void Intermediate()
        {
            var time = Stopwatch.StartNew();
            Action acc = ThreadWorkMethod_Intermediate;
            var ar1 = acc.BeginInvoke(null, null);
            var ar2 = acc.BeginInvoke(null, null);
            ar1.AsyncWaitHandle.WaitOne();
            ar2.AsyncWaitHandle.WaitOne();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Intermediate()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (this)
                {
                    Counter++;
                }
            }
        }

    This is better and uses the .NET thread pool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. That is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as the lock object, to ensure that nobody else can lock your lock object. When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but it can still work because your CPU architecture gives you more invariants than the CLR memory model. For a simple counter there is an easy lock-free alternative in the Interlocked class in .NET. As a general rule you should not try to write lock-free algorithms, since most likely you will fail to get them right on all CPU architectures.

        void Experienced()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            t1.Wait();
            t2.Wait();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Experienced()
        {
            for (int i = 0; i < Increments; i++)
            {
                Interlocked.Increment(ref Counter);
            }
        }

    Since time moves forward we no longer use threads explicitly, but the much nicer Task abstraction which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

        lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations, which lock the memory bus to prevent other processors from reading stale values, are such things. Second: this is the same increment call prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and try to reason about its efficiency, you will fail. Only the generated machine code counts.

    Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently.

        Level                        Time in s
        IJustLearnedToUseThreads     flawed code
        Intermediate                 1,5 (lock)
        Experienced                  0,3 (Interlocked.Increment)
        Master                       0,1 (1,0 for int[2])

    That lock-free thing is really nice. But if you read more about CPU caches, cache coherency, and false sharing, you can do even better.

        // Cache line size is 64 bytes on my machine with an 8-way associative
        // cache - try for yourself, e.g. 64 on more modern CPUs.
        int[] Counters = new int[12];

        void Master()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
            t1.Wait();
            t2.Wait();

            Counter = Counters[0] + Counters[Counters.Length - 1];
            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Master(object number)
        {
            int index = (int)number;
            for (int i = 0; i < Increments; i++)
            {
                Counters[index]++;
            }
        }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (by a factor of 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you access a value from one location, the CPU does not fetch only that one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and to non-obvious code which is faster although it has many more memory reads than another algorithm. So what have we done here?

    We started with correct code, but it was lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy, used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object was exposed to the outside). Knowledge increased further and we found a lock-free version of our counter, which is nice and clean and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and to CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep …

    Read the article

  • How should I handle incomplete packet buffers?

    - by Benjamin Manns
    I am writing a client for a server that typically sends data as strings in 500 or fewer bytes. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes for all the client knows (on initialization or significant events). However, I would rather not have each client running with a 50 MB socket buffer (if that's even possible). Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially sent data sets? For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0. I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. Also, the server could send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character. I would want that data set stored somewhere for later appending and processing. Also, I'm using asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive) if that matters.
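
    Since the messages are \0-delimited, one workable structure is a small assembler that carries incomplete bytes between receives. A minimal sketch (names hypothetical) that could be fed each chunk from EndReceive:

        using System;
        using System.Collections.Generic;
        using System.Text;

        class NullDelimitedAssembler
        {
            private readonly List<byte> _carry = new List<byte>();

            // Feed each received chunk; yields every complete message and keeps
            // any trailing partial message for the next call. Enumerate the
            // result fully before calling Feed again.
            public IEnumerable<string> Feed(byte[] chunk, int count)
            {
                for (int i = 0; i < count; i++)
                {
                    if (chunk[i] == 0)
                    {
                        yield return Encoding.ASCII.GetString(_carry.ToArray());
                        _carry.Clear();
                    }
                    else
                    {
                        _carry.Add(chunk[i]);
                    }
                }
            }
        }

    Usage from the receive callback would then be along the lines of: foreach (string msg in assembler.Feed(buffer, bytesRead)) Process(msg); - the buffer only ever grows to the size of one incomplete message, not 50 MB.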

    Read the article

  • C# - How can I open a second winform asynchronously but still behave as a child to the parent form?

    - by Gary Willoughby
    I am creating an application and I would like to implement a progress window that appears when a lengthy process is taking place. I've created a standard Windows Forms project in which I've built my app using the default form. I've also created a new form to use as a progress window. The problem arises when I open the progress window (in a function) using:

        ProgressWindow.ShowDialog();

    When this command is encountered, the focus is on the progress window and I assume it's now that window whose message loop is being processed for events. The downside is that it blocks the execution of my lengthy operation in the main form. If I open the progress window using:

        ProgressWindow.Show();

    then the window opens correctly and no longer blocks the execution of the main form, but it doesn't act as a child (modal) window should, i.e. it allows the main form to be selected, is not centered on the parent, etc. Any ideas how I can open a new window but continue processing in the main form?
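
    One common pattern is to move the lengthy work off the UI thread first, and only then show the dialog modally, closing it from the UI thread when the work finishes. A minimal sketch, assuming .NET 4's Task is available (DoLengthyWork is a hypothetical stand-in for the real operation; on older frameworks a BackgroundWorker fills the same role):

        // Requires System.Threading.Tasks (.NET 4).
        private void RunLengthyOperation()
        {
            var progress = new ProgressWindow();

            // Run the lengthy work on a background thread...
            Task.Factory.StartNew(() => DoLengthyWork())
                // ...and close the dialog back on the UI thread when it finishes.
                .ContinueWith(t => progress.Close(),
                              TaskScheduler.FromCurrentSynchronizationContext());

            // Modal: blocks input and centers on the parent, while the worker
            // started above keeps running; Close() ends the modal loop.
            progress.ShowDialog(this);
        }

    The modal message loop started by ShowDialog still pumps the posted continuation, so the dialog closes itself when the work completes.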

    Read the article

  • How can I check that the NSPasteboard is updated?

    - by Ben Packard
    I'm automating a copy command to place some text on the pasteboard every second or so - unfortunately this is my only way of accessing the text, which is in another application. After copying, I access the pasteboard text and process it. Sometimes, the copy command will be sent when nothing is selected - for example in textEdit, if the cursor is at the end of a line (instead of highlighting some text) and you hit copy, you get a system beep because there is nothing selected to copy. The pasteboard does not update and retains its previous data. I can't think of a creative way to identify when this happens. If I send a copy command and the pasteboard doesn't update, is there any kind of time stamp on the pasteboard I can access that will confirm that something has or hasn't been captured? I was looking at the changeCount, but I'm not sure what that is for exactly, and the documentation didn't help me much - red herring? Any simple and effective ideas gratefully received!

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe) and the second executable first checks if such a file exists; if it does not exist it checks again, and when it finds the file it reads its contents. This is how information is transferred between the two executables. The way the code is structured, the second executable succeeds on the first try. Now I have to clean this code up and was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than using pipes? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.
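
    For comparison, this is roughly what the pipe version of that handshake looks like - a sketch in C# for brevity (the native equivalents for a C++ codebase would be CreateNamedPipe on Windows and mkfifo on POSIX). The key difference from the file approach is that the consumer blocks until the data is there, instead of polling the filesystem for a file to appear:

        using System.IO;
        using System.IO.Pipes;

        static class PipeDemo
        {
            // Producer side: runs in one executable.
            public static void Produce()
            {
                using (var server = new NamedPipeServerStream("mydata"))
                {
                    server.WaitForConnection();       // blocks until the consumer attaches
                    using (var writer = new StreamWriter(server))
                        writer.WriteLine("payload");  // no temp file, no existence polling
                }
            }

            // Consumer side: runs in the other executable.
            public static void Consume()
            {
                using (var client = new NamedPipeClientStream(".", "mydata"))
                {
                    client.Connect();                 // blocks until the producer is up
                    using (var reader = new StreamReader(client))
                        System.Console.WriteLine(reader.ReadLine());
                }
            }
        }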

    Read the article

  • How to get information from objdump

    - by Summer_More_More_Tea
    I encountered a problem when reading information dumped out from an executable file on Linux. The information is as follows:

        804a0ea:  04 08    add    $0x8,%al
        ...
        804a0f4:  a6       cmpsb  %es:(%edi),%ds:(%esi)

    I have two questions:

    1. What do the addresses 804a0ea and 804a0f4 mean - are they virtual addresses in the process's address space?
    2. What does the ... mean, and how can I get the instruction at address 804a0f0?

    Thanks in advance.

    Read the article

  • LINQ to SQL repository - caching data

    - by creativeincode
    I have built my first MVC solution and used the repository pattern for retrieving/inserting/updating my database. I am now in the process of refactoring and I've noticed that a lot of (in fact all) the methods within my repository hit the database every time. This seems overkill, and what I'd ideally like to do is 'cache' the main data object, e.g. 'GetAllAdverts', from the database and then query against this cached object for things like 'FindAdvert(id), AddAdvert(), DeleteAdvert()' etc. I'd also need to consider updating/deleting/adding records to both this cache object and the database. What is the best approach for something like this? My knowledge of this type of thing is minimal, and I'm really looking for advice/guidance/a tutorial to point me in the right direction. Thanks in advance.
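
    One common approach is to wrap the existing repository in a caching decorator, so callers stay unchanged and all invalidation lives in one place. A minimal sketch, assuming a hypothetical IAdvertRepository interface and an Advert class with an Id property:

        using System.Collections.Generic;
        using System.Linq;

        public interface IAdvertRepository
        {
            IEnumerable<Advert> GetAllAdverts();
            void AddAdvert(Advert advert);
        }

        public class CachedAdvertRepository : IAdvertRepository
        {
            private readonly IAdvertRepository _inner;  // the LINQ to SQL repository
            private List<Advert> _cache;

            public CachedAdvertRepository(IAdvertRepository inner) { _inner = inner; }

            public IEnumerable<Advert> GetAllAdverts()
            {
                if (_cache == null)
                    _cache = _inner.GetAllAdverts().ToList();  // single database hit
                return _cache;
            }

            public Advert FindAdvert(int id)
            {
                return GetAllAdverts().FirstOrDefault(a => a.Id == id);  // served from cache
            }

            public void AddAdvert(Advert advert)
            {
                _inner.AddAdvert(advert);  // write through to the database
                _cache = null;             // invalidate; next read reloads fresh data
            }
        }

    Delete/update methods follow the same write-through-then-invalidate shape; in a web app the cache would typically live in a shared scope such as ASP.NET's cache rather than per-request.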

    Read the article

  • How to automate / script processes like signups

    - by silverkid
    Which is the best tool for this: automation of a signup process to a website, e.g. an email signup? The tool should be able to take data from an external data file, like an Excel or CSV file. This data file would contain data such as first name, last name, username, password etc. - the basic data required during an email signup. I imagine the data file containing each field in a separate column and each row containing the data for a different registration/user. At the places where manual intervention is required, like image verification, the tool should be able to pause the script until the manual bit is done, then continue with the script. What is the best way to do this - an automation tool, or any scripting language? Please suggest.

    Read the article

  • 0xDEADBEEF equivalent for 64-bit development?

    - by Peter Mortensen
    For C++ development for 32-bit systems (be it Linux, Mac OS or Windows, PowerPC or x86) I have initialised pointers that would otherwise be undefined (e.g. they cannot immediately get a proper value) like so:

        int *pInt = reinterpret_cast<int *>(0xDEADBEEF);

    (To save typing and being DRY, the right-hand side would normally be in a constant, e.g. BAD_PTR.) If pInt is dereferenced before it gets a proper value then it will crash immediately on most systems (instead of crashing much later when some memory is overwritten, or going into a very long loop). Of course the behavior is dependent on the underlying hardware (getting a 4-byte integer from the odd address 0xDEADBEEF from a user process may be perfectly valid), but the crashing has been 100% reliable for all the systems I have developed for so far (Mac OS 68xxx, Mac OS PowerPC, Linux Redhat Pentium, Windows GUI Pentium, Windows console Pentium). For instance on PowerPC it is illegal (bus fault) to fetch a 4-byte integer from an odd address. What is a good value for this on 64-bit systems?

    Read the article

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's sort of the universal default, but it's been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short (<1 kB each) blobs of information about 100Ms of URLs. There is (or should be - MySQL cannot seem to handle it) a very high volume of inserts/updates/retrievals but few complex queries - not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL/relational model/etc. optional - anything will do as long as it's fast, networked, and language-independent.

    Read the article

  • Error during maven build: "[java] Timestamp response not valid"

    - by fei
    My maven build started failing randomly with the following error, which I cannot make sense of, and googling it doesn't give me anything useful:

        [echo] Creating a full package...
        [java] Timestamp response not valid
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to execute: Executing Ant script: /airtest.build.xml [package-admin-air]: Failed to execute. Java returned: 10

    This is a random error that pops up at various points during the build process; sometimes the build succeeds and then the next one fails again. This is really weird - has anyone seen this before? I'm using maven 2.2.1. BTW, the error return code 10 on Windows means "Environment is invalid."

    Read the article

  • How to generate makefile targets from variables?

    - by Ketil
    I currently have a makefile to process some data. The makefile gets the inputs to the data processing by sourcing a CONFIG file, which defines the input data in a variable. Currently, I symlink the input files to a local directory, i.e. the makefile contains:

        tmp/%.txt: tmp
            ln -fs $(shell echo $(INPUTS) | tr ' ' '\n' | grep $(patsubst tmp/%,%,$@)) $@

    This is not terribly elegant, but appears to work. Is there a better way? Basically, given

        INPUTS = /foo/bar.txt /zot/snarf.txt

    I would like to be able to have e.g.

        %.out: %.txt
            some command

    as well as targets to merge results depending on all $(INPUTS) files. Also, apart from the kludgosity, the makefile doesn't work correctly with -j, something that is crucial for the analysis to complete in reasonable time. I guess that's a bug in GNU make, but any hints are welcome.

    Read the article

  • How do I watch a file for changes using Python?

    - by Jon Cage
    I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in and do some processing on it. What's the best way to do this? I was hoping there'd be some sort of hook in the PyWin32 library. I've found the win32file.FindNextChangeNotification function but have no idea how to ask it to watch a specific file. If anyone's done anything like this I'd be really grateful to hear how. [Edit] I should have mentioned that I was after a solution that doesn't require polling. [Edit] Curses! It seems this doesn't work over a mapped network drive. I'm guessing Windows doesn't 'hear' any updates to the file the way it does on a local disk.

    Read the article

  • How to assign permissions for Copy/Paste on windows

    - by jalchr
    Well, as everyone knows, there is no way to assign permissions for copy/paste of files on the Windows platform. I need to control the copy process from a central file server, in a way that lets me know:

    - which user performed the copy
    - which files were copied
    - where he pasted them
    - the total size of the data copied
    - the time of the copy operation
    - if a user exceeds the allowed "Copy-Limit", a dialog box requests him to enter administrative credentials or denies him (as configured)
    - all this data is stored in a file for later review or sent by email

    I need to collect this data by putting a utility program on the server itself, without any other installation on client computers. I know about monitoring the clipboard, but which clipboard would it be - the user's clipboard or the server's clipboard? And what about drag-and-drop operations, which don't even pass through the clipboard? Any knowledge of whether SystemFileWatcher is useful in such a case? Any ideas?

    Read the article

  • Import a Collada model doesn't align to pixels

    - by Dan Friedman
    Assume I have a model that is simply a cube. (It is more complicated than a cube, but for the purposes of this discussion, we will simplify.) In Sketchup, the cube is Xmm by Xmm by Xmm, where X is an integer. I then export a Collada file and subsequently load that into three.js. Now if I look at the geometry bounding box, the values are floats, not integers. So now assume I am putting cubes next to each other with a small space in between, say 1 pixel. Because screens can't draw half pixels, sometimes I see one pixel and sometimes I see two, which causes a lack of uniformity. I think I can resolve this satisfactorily if I can somehow get the imported model to have integer dimensions. I have full access to all parts of the model starting with Sketchup, so any point in the process is fair game. Is it possible? Thanks.

    Read the article

  • Regular Expression Help - Brackets within brackets

    - by adbox
    Hello, I'm trying to develop a function that can sort through a string that looks like this:

        Donny went to the {park|store|{beach with friends|beach alone}} so he could get a breath of fresh air.

    What I intend to do is search the text recursively for {} patterns in which there is no { or } inside the braces, so only the innermost sandwiched text is selected. I will then run a PHP function to split the contents into an array and select one option at random, repeating the process until the whole string has been parsed, yielding a complete sentence. I just cannot wrap my head around regular expressions, though. Appreciate any help!
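
    The trick is a pattern that can only ever match an innermost pair - \{[^{}]*\} - applied repeatedly until no braces remain. A minimal sketch of that loop (shown in C# for illustration; the same pattern works with PHP's preg_replace_callback):

        using System;
        using System.Text.RegularExpressions;

        class SpinText
        {
            static readonly Random Rng = new Random();

            // Matches a brace pair with no nested braces inside, i.e. an
            // innermost group such as {beach with friends|beach alone}.
            static readonly Regex Innermost = new Regex(@"\{([^{}]*)\}");

            // Spin("a {b|c} d") returns "a b d" or "a c d".
            public static string Spin(string text)
            {
                while (Innermost.IsMatch(text))
                {
                    text = Innermost.Replace(text, m =>
                    {
                        string[] options = m.Groups[1].Value.Split('|');
                        return options[Rng.Next(options.Length)]; // pick one at random
                    });
                }
                return text;
            }
        }

    Each pass collapses the innermost groups, so nested alternatives like the beach example resolve from the inside out.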

    Read the article

  • ado.net slow updating large tables

    - by brett
    The problem: 100,000+ name & address records in an Access table (2003). I need to iterate through the table and update each record's detail with the output from a 3rd-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to update to Access 2007 and its 'non-Jet' accdb format to maintain compatibility with clients. I've tried using ADO.NET datasets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.
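
    The usual culprit is letting a DataAdapter issue one full round trip per row. A minimal sketch of the alternative - reuse a single parameterized command inside one transaction (assuming the ACE OLE DB provider for the accdb file; the table, column, and Record type names are hypothetical):

        using System.Collections.Generic;
        using System.Data.OleDb;

        class BatchUpdater
        {
            public void UpdateDetails(string connectionString, IEnumerable<Record> records)
            {
                using (var conn = new OleDbConnection(connectionString))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = new OleDbCommand(
                        "UPDATE Contacts SET Detail = ? WHERE ID = ?", conn, tx))
                    {
                        var pDetail = cmd.Parameters.Add("Detail", OleDbType.VarWChar);
                        var pId = cmd.Parameters.Add("ID", OleDbType.Integer);

                        foreach (var record in records)        // the 100,000+ rows
                        {
                            pDetail.Value = record.NewDetail;  // output of the 3rd-party DLL
                            pId.Value = record.Id;
                            cmd.ExecuteNonQuery();             // one prepared statement, reused
                        }
                        tx.Commit();                           // one commit instead of 100,000
                    }
                }
            }
        }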

    Read the article

  • Error in MySQL Workbench Forward Engineer Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:

        DELIMITER //
        USE DB_Name//
        DB_Name//
        DROP procedure IF EXISTS `DB_Name`.`SP_Name` //
        USE DB_Name//
        DB_Name//
        CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
        BEGIN
            SELECT * FROM Table_Name WHERE Id = id;
        END//

    The two lines that are simply the database name followed by the delimiter are errors and are reported as such when running the script. As long as they are ignored, it looks like everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment, which uses MySQL 5.1.36.

    Read the article

  • Any MVVM frameworks work well with NavigationWindows?

    - by Will
    I'd like to "throw away" the current version of a WPF application and move version 2 to a stable MVVM framework. The main concern I'm having is that I don't see much talk about MVVM frameworks and navigation (i.e., NavigationWindows and Frames). My current app relies heavily on Pages to present views to the user. I would prefer to keep it this way. I'd rather not change everything to UserControls and rely on DataTemplates to switch out the view I need to present. Are there any MVVM frameworks that: Work well with NavigationWindows and Pages Provide ViewModels adequate access to the navigation process Provide the ability to change navigation in response to security-related events (e.g., redirect to login Page after logout)

    Read the article

  • Can you decode a mutable Bitmap from an InputStream?

    - by Daniel Lew
    Right now I've got an Android application that:

    1. downloads an image,
    2. does some pre-processing on that image,
    3. displays the image.

    The dilemma is that I would like this process to use less memory, so that I can afford to download a higher-resolution image. However, when I download the image now, I use BitmapFactory.decodeStream(), which has the unfortunate side effect of returning an immutable Bitmap. As a result, I'm having to create a copy of the Bitmap before I can start operating on it, which means I have to have 2x the size of the Bitmap's memory allocated (at least for a brief period of time; once the copy is complete I can recycle the original). Is there a way to decode an InputStream into a mutable Bitmap?

    Read the article

  • Photo uploading via email

    - by Lee
    I'd like to be able to upload photos via email, which I've seen (and used) on eat.ly and meetups.jquery.com, but I haven't been able to work out how to do this. Does anyone have a solution? Essentially I believe the process should be something like this:

    1) The user adds a picture to an email on a mobile device, then sends it to a specific email address, say '[email protected]'.
    2) An email server, cron job or something else looks at the sender's address and adds the attachment to that account.
    3) The photo shows up on the user's profile page.

    I run Apache servers with MySQL, PHP, and a jQuery framework, and I have email servers running Courier. Am I missing anything?

    Read the article

  • MVC3 Razor DropDownListFor Enums

    - by jordan.baucke
    Trying to get my project updated to MVC3, there's something I just can't find. I have a simple datatype of enums:

        public enum States { AL, AK, AZ, ... WY }

    which I want to use as a DropDown/SelectList in a view of a model that contains this datatype:

        public class FormModel
        {
            public States State { get; set; }
        }

    Pretty straightforward: when I go to use the auto-generated view for this partial class, it ignores this type. I need a simple select list that sets the value of the enum as the selected item when I hit submit and processes it via my AJAX/JSON POST method. And then the view (???!):

        <div class="editor-field">
            @Html.DropDownListFor(model => model.State, model => model.States)
        </div>

    Thanks in advance for the advice!
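
    One way that works in MVC3 is to build the SelectList in the view from Enum.GetValues, passing the model's current value as the selection. A sketch (on POST, the default model binder converts the posted name, e.g. "AL", back to the States enum):

        <div class="editor-field">
            @Html.DropDownListFor(
                model => model.State,
                new SelectList(Enum.GetValues(typeof(States)), Model.State))
        </div>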

    Read the article

  • Pipeline For Downloading and Processing Files In Unix/Linux Environment With Perl

    - by neversaint
    I have a list of file URLs which I want to download:

        http://somedomain.com/foo1.gz
        http://somedomain.com/foo2.gz
        http://somedomain.com/foo3.gz

    What I want to do for each file is the following:

    1. Download foo1, foo2, ... in parallel with wget and nohup.
    2. Every time a download completes, process the file with myscript.sh.

    What I have is this:

        #! /usr/bin/perl
        @files = glob("foo*.gz");
        foreach $file (@files) {
            my $downurls = "http://somedomain.com/" . $file;
            system("nohup wget $file &");
            system("./myscript.sh $file >> output.txt");
        }

    The problem is that I can't tell the above pipeline when a file has finished downloading, so myscript.sh doesn't get executed properly. What's the right way to achieve this?

    Read the article
