Search Results

Search found 19975 results on 799 pages for 'disk queue length'.


  • Melting Laptop Power Supply Tip

    - by AlReece45
    Several (6-7) months ago, my laptop power supply cord got a cut in it and stopped working. Having gotten cheap (and short) power supplies in the past, I decided to buy two brand new ones from the manufacturer (ASUS).

    I used my laptop a little less than usual between February and March. During that time I noticed a few times that the power supply, even though plugged in, did not provide power, and often the computer would just shut off on me. I figured it was just that one power supply being bad. I had left the alternate at my parents' house in another state and asked them to ship it to me.

    At work the other day I wanted to get a file off of the hard disk. So I booted the laptop up, knowing that it had a low battery, and plugged it in. During the first two minutes of use, I was told that the battery was low and I should plug it in. I unplugged it, inspected the end (being plugged in, this was suspicious), and decided I shouldn't plug it back in: the plastic on the tip was melting from the heat of the metal. The computer had simply booted up and I had the file manager open; it had not been on for more than ten minutes. I know that computers tend to get pretty hot, but the melting point of plastic is usually above 200°C, so that's much hotter than the computer should be generating.

    I went and bought a THIRD power supply, this time a universal one from Best Buy (it was very fast to buy and test). I tried it out on the computer and its tip is melting as well. My older laptop that uses the universal power supply runs on it perfectly (it has been about a week and a half of use now). I have tried using the computer without the battery, with the same effect. Obviously, this is not a problem with the power supply.

    My roommate and I, being trained computer techs, are contemplating taking the computer apart and desoldering and resoldering the power tip. (The computer is about 6 months out of its 2-year warranty.) We're hoping that will correct the issue, as I would prefer to put my money toward a good desktop rather than yet ANOTHER $1200+ laptop. Is there anything I'm missing here that might cause the tip on the power unit to melt?


  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yes, "billion": no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hits-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with Gunicorn as the gateway.

    My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using Apache ab against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second. I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:

    - No disk usage. This is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage. Presumably due to Redis's memory management, my memory usage will gradually climb up and then drop back down, but it has never once been my bottleneck.
    - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2 minutes), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5 million hits so far), my requests per second degrade to about 250.

    My questions:

    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file Nginx performance comparable to what others have experienced?
    b. Are there common Nginx tunings for such high-volume applications (see the sketch below)? I have worker threads set to 64, and Gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any Linux-level settings that could be limiting my incoming connections?
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
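    On question (b), a minimal nginx.conf sketch of the knobs usually tuned for this kind of workload; the values here are assumptions to benchmark, not recommendations, and the somaxconn comment touches on question (c):

        # Illustrative tuning only -- benchmark each value on your own hardware.
        worker_processes      4;        # typically one per CPU core, not 64
        worker_rlimit_nofile  65536;    # raise the per-worker file-descriptor limit

        events {
            worker_connections  8192;   # concurrent connections per worker
            use epoll;                  # efficient event notification on Linux
            multi_accept        on;     # drain the accept queue aggressively
        }

        # Pair with a larger kernel accept backlog for question (c), e.g.:
        #   sysctl -w net.core.somaxconn=4096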


  • I can't delete a directory inside a junctioned directory

    - by Fredy Muñoz
    So this is the deal. A couple of days ago I moved my profile folder C:\Documents and Settings\fmunoz to a different drive, D:\fmunoz. Today, I created a directory on my desktop using the point-and-click method:

    1. Right-click on an empty space on the desktop
    2. Select New
    3. Select Folder
    4. Leave the default name New Folder and press Enter

    I tried to delete the folder using the point-and-click method (right-click the New Folder directory, select Delete). After five seconds, I got the following message:

        Error Deleting File or Folder
        Cannot delete New Folder: Access is denied.
        Make sure the disk is not full or write-protected and that the file is not currently in use.

    Initially I thought that there must be some sort of indexing service locking the directory, so I got a list of open files using the TuneUp Process Manager tool, but the New Folder directory wasn't there. I double-clicked My Computer, navigated to the desktop directory C:\Documents and Settings\fmunoz\Desktop, tried to delete the New Folder directory using the same point-and-click method described above, and got exactly the same message after the same amount of time. In the same window, I navigated to the actual location of the desktop directory, D:\fmunoz\Desktop, tried to delete the New Folder directory, and this time it worked.

    I thought that this behavior was due to some special treatment that Windows gives to the desktop or the profile directories, so I tried doing the same thing with a different set of directories:

    1. Created a folder D:\dummy
    2. Created a junction C:\dummy pointing to D:\dummy
    3. Created a New Folder directory in C:\dummy
    4. Tried to delete New Folder from C:\dummy. Didn't work.
    5. Tried to delete New Folder from D:\dummy. It worked.

    I also tried creating the folder in the actual directory rather than the junction directory:

    1. Created a New Folder directory in D:\dummy
    2. Tried to delete New Folder from C:\dummy. Didn't work.
    3. Tried to delete New Folder from D:\dummy. It worked.

    I also tried using the Delete key instead of the Delete option of the context menu, but it didn't work either. Shift+Delete works, as does the rd command in the console, but in both cases the deleted directory doesn't go to the Recycle Bin, which is my intention when using the Delete context menu option or the Delete key.


  • Achieving A 1600 x 900 Resolution For (Guest OS) Ubuntu Under (Host OS) Windows 7

    - by panamack
    I am running Ubuntu 11.10 as a guest OS using VirtualBox 4.1.16 installed on Windows 7 Ultimate. On my laptop I'd like to be able to run Ubuntu in full-screen mode at 1600 x 900. I only have options within the virtual machine to select 4:3 display settings such as 1600 x 1200, 1440 x 1050, etc. I have Guest Additions installed.

    At the Windows command prompt, I tried typing:

        VBoxManage setextradata "Virtual Ubuntu Coursera ESSAAS" "CustomVideoMode1" "1600x900x16"

    This didn't work; there is still no 1600 x 900 resolution available in Ubuntu. I tried this having read the following section of the VirtualBox help (which also says something about a 'video mode hint feature', though I'm not sure what that means):

        9.7. Advanced display configuration
        9.7.1. Custom VESA resolutions

        Apart from the standard VESA resolutions, the VirtualBox VESA BIOS allows you to add up to 16 custom video modes which will be reported to the guest operating system. When using Windows guests with the VirtualBox Guest Additions, a custom graphics driver will be used instead of the fallback VESA solution so this information does not apply. Additional video modes can be configured for each VM using the extra data facility. The extra data key is called CustomVideoMode<x> with x being a number from 1 to 16. Please note that modes will be read from 1 until either the following number is not defined or 16 is reached. The following example adds a video mode that corresponds to the native display resolution of many notebook computers:

            VBoxManage setextradata "VM name" "CustomVideoMode1" "1400x1050x16"

        The VESA mode IDs for custom video modes start at 0x160. In order to use the above defined custom video mode, the following command line has to be supplied to Linux:

            vga = 0x200 | 0x160, i.e. vga = 864

        For guest operating systems with VirtualBox Guest Additions, a custom video mode can be set using the video mode hint feature.

    UPDATE 02.06.12: I've just tried creating a new virtual machine using the same original disk image I had been given. This had Guest Additions v4.1.6 installed and provided me with the 1600 x 900 full-screen display I want. It's after I then install Guest Additions v4.1.16 (the version included with my VirtualBox installation) that my only choices are 4:3 displays, e.g. 1600 x 1200. It seems this is the cause.
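    Until the Guest Additions regression is sorted out, one workaround often used on Linux guests is to add the mode by hand with xrandr inside Ubuntu. A hedged sketch: the output name (VBOX0 here) varies per setup, and the modeline numbers should be taken from whatever cvt prints on your own system:

        $ cvt 1600 900                    # prints a CVT modeline for 1600x900 @ 60 Hz
        $ xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync
        $ xrandr --addmode VBOX0 1600x900_60.00      # attach the mode to the virtual output
        $ xrandr --output VBOX0 --mode 1600x900_60.00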


  • Turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website and my CMS is giving me an error when uploading videos: "Failed To Find Flength File". I did some research, and the answer I got from the coder is below. I did everything he suggests, but the one thing I could not do is turn off what he refers to as the "performance cache", discussed in the last paragraph. I am on CentOS.

        Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file.

        1. A mod_security module for Apache. To fix it, just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or in some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded in ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 (RW-R--R--).

            # Turn off mod_security filtering.
            SecFilterEngine Off
            # The below probably isn't needed,
            # but better safe than sorry.
            SecFilterScanPOST Off

        If the above method does not work, try putting the following lines into the file instead:

            SetEnvIfNoCase Content-Type \
            "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
            mod_gzip_on No

        2. "Performance Cache" enabled on OS X Server. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching". Apparently if ANY of your hosted sites are using performance caching, then by default all sites (domains) will attempt to. The fix, then, is to disable the performance cache on all hosted sites.


  • Concerning persistence size in the Linux Live Creator

    - by user63085
    Hello everyone! For the last several months I have used Linux Live USB Creator, which is a very useful app for making portable OSes on flash drives. I mostly use this application to test and try out new OSes as they are released, before I decide to make a hard disk installation on the computer. In many cases, the application developers will allow the "persistence" feature in the flash-drive-installed OS, which is just another way of saying that after multiple boot-ups and shutdowns, all the changes made to the OS will be saved on the flash drive.

    But I have a question about the limit of the persistence size in Linux Live USB Creator (currently version 2.6). I installed Super OS 10 onto a partition on my external drive which has 30 GB. I wanted to reserve 10 GB for persistence so that I can install more applications and space will not run out as I update the installed applications or do system updates. But why is it that only 3950 MB can be allocated for persistence? It would be great if, when desired, much more persistence space could be set aside so that the space will not run out soon.

    Also, as I have installed the OS on a 30 GB drive, I tried to see how much space is left, but it seems only the remainder of the persistence space is displayed when I click on the File System folder. For example, just after installing, there is 3.5 GB of free space. Where can I access the remaining 26 GB or so of drive space which is on the same drive? How do I access it?

    It would be helpful if anyone could explain and help me with this. Most importantly, it would be a big relief if the persistence could somehow be expanded by a workaround, so that I can continue using my Super OS 10.04 (now heavily customized) OS, which unfortunately has just over 576 MB of space left now, after I removed OpenOffice.org and installed LibreOffice earlier today. This is what remains from the maximum allowable 3950 MB of space for persistence at set-up. Thanks in advance!


  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single-Xeon Dell server (SBS 2003) with RAID 5 7200 RPM disks and 2 GB RAM. I have not done any baseline measurements on the old physical server, but the web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?". It's pretty horrible: a difference of 8 seconds of load time on one screen.

    First of all, I'm not a SQL guru, but here's what I've tried:

    - Had a Small instance, now running a c1.medium (High-CPU Medium) Windows 2008 R2 32-bit EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement.
    - Changed the page file from a fixed 256 MB to Automatic.
    - Set up a striped mirror from within Disk Management with two attached 1 GB EBS volumes. Moved the database and transaction log there, left everything else on the boot EBS volume. No noticeable change.
    - Looked at memory: ~1000 MB of physical memory free (1.7 GB total). Changed the SQL instance to use a minimum of 1024 MB RAM; restarted the server, no change in memory usage. SQL is still only using ~28 MB of RAM(!).

    So I'm thinking: this database is tiny (28 MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB, which seems kind of large in comparison; has this not been committed? Is it a cause of performance degradation? I recall something about recovery models and log sizes somewhere in my travels, but I'm not positive (a sketch for checking this follows below).

    Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, and that had no effect.

    Anyway, what else can I try here? It seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?
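    On the recovery-model hunch: under the FULL recovery model, the log only truncates after a log backup, which would explain a 241 MB log behind a 28 MB database. A hedged T-SQL sketch for checking and, if point-in-time restore isn't needed, switching to SIMPLE; the database and logical log file names are placeholders:

        -- Check the current recovery model
        SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyAppDb';

        -- If point-in-time recovery is not required:
        ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (MyAppDb_log, 64);   -- target size in MB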


  • MySQL performance over a (local) network much slower than I would expect

    - by user15241
    MySQL queries in my production environment are taking much longer than I would expect them to. The site in question is a fairly large Drupal site with many modules installed. The webserver (Nginx) and database server (MySQL) are hosted on separate machines, connected by a 100 Mbps LAN connection (hosted by Rackspace). I have the exact same site running on my laptop for development. Obviously, on my laptop, the webserver and database server are on the same box.

    Here are my database query times:

        Production:
        Executed 291 queries in 320.33 milliseconds. (homepage)
        Executed 517 queries in 999.81 milliseconds. (content page)

        Development:
        Executed 316 queries in 46.28 milliseconds. (homepage)
        Executed 586 queries in 79.09 milliseconds. (content page)

    As can clearly be seen from these results, the time spent querying the MySQL database is much shorter on my laptop, where the MySQL server is running on the same box as the web server. Why is this?!

    One factor must be network latency. On average, a round trip from the webserver to the database server takes 0.16 ms (shown by ping), and that must be added to every single MySQL query. So, taking the content page example above, where 517 queries are executed, network latency alone adds about 82 ms to the total query time. However, that doesn't account for the difference I am seeing (79 ms on my laptop vs. 999 ms on the production boxes).

    What other factors should I be looking at (one candidate is sketched below)? I had thought about upgrading the NIC to a gigabit connection, but clearly there is something else involved. I have run the MySQL performance tuning script from http://www.day32.com/MySQL/ and it tells me that my database server is configured well (better than my laptop, apparently). The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments, and in the production environment I have even tried increasing max_heap_table_size and tmp_table_size to 1 GB, with no change (I think this is because I have some BLOB and TEXT columns).
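    One classic culprit worth ruling out when MySQL is fast locally but slow over a LAN is reverse-DNS resolution of the client on each new connection. A hedged my.cnf sketch; note that with this set, grants must be defined by IP rather than hostname:

        [mysqld]
        skip-name-resolve    # don't reverse-resolve client IPs on connect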


  • Successfully concatenating multiple videos

    - by wiseguydigital
    My mission is to create videos out of old web slideshows. To start with, I have JPEGs and audio files that worked as Flash slideshows in an old system, structured like this:

        Audio structure:
            my_audio_1.mp3 (this file is a 3-second MP3 of silence)
            my_audio_2.mp3
            my_audio_3.mp3
            my_audio_4.mp3, etc. ... roughly 30 MP3s per slideshow

        Image structure:
            my_image_1.jpg (this acts as the opening slide)
            my_image_2.jpg
            my_image_3.jpg
            my_image_4.jpg, etc. ... roughly 30 images per slideshow

    As there are almost 100 slideshows that must be converted to video, I have created a web-based interface using PHP to automate the process; it sits on a local system and attempts to combine the files using shell_exec(). The process uses the following workflow:

    1. Loop through each slide and make an AVI or MPEG. So for instance my_mini_video_2.avi would be a video that consists of my_image_2.jpg with a soundtrack of my_audio_2.mp3. This slide would last the length of my_audio_2.mp3.
    2. Join / stitch / concat all of the mini videos to create the final video, using a combination of cat and either mencoder or ffmpeg (I have also tried avimerge, but to no avail).
    3. Transcode the new 'master' video to various formats such as FLV etc.

    I thought this would be simple and have been close on many occasions, but it still won't work. I can't get past stage 2 because I can't get a perfect 'master' video. I have now experimented with mencoder and ffmpeg and seem to have been through every combination I can think of. The problem is that the audio and visuals never sync, no matter what I try.

    I have even tried creating audio-less mini videos, joining the MP3s into one long MP3 using both cat and mp3wrap, and then assigning the new long MP3 as the audio track, but this always produces either a very short file or a badly slowed-down file and makes the female voiceover sound like a male boxer!!! There appear to be no problems at all with the original files.

    Does anybody have any experience in producing a video successfully from the same kind of starting point? Or any ideas on what I may be doing wrong? As an example: if I create silent mini-videos and stitch them together into 'temp-master.mpg', and then join the MP3s together into a single MP3 called 'temp-master-audio.mp3', the audio file's duration is 09:10 and the video file's duration is 08:35. They should be the same, and the audio will seem sloooow. I haven't posted code as I have written lots and lots of combinations.
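    For reference, a hedged sketch of the per-slide-then-concatenate workflow with a reasonably modern ffmpeg; filenames and codec flags are illustrative. The key point is re-encoding every slide to identical codec, resolution, frame-rate and sample-rate parameters before concatenation, since mixed parameters are a common cause of drifting sync:

        # One mini video per slide: a looping still plus its audio, ending with the audio
        ffmpeg -loop 1 -i my_image_2.jpg -i my_audio_2.mp3 -shortest \
               -c:v libx264 -r 25 -pix_fmt yuv420p -c:a aac -ar 44100 slide_2.mp4

        # Concatenate with the concat demuxer; inputs must share identical parameters
        printf "file 'slide_%d.mp4'\n" 1 2 3 > list.txt
        ffmpeg -f concat -safe 0 -i list.txt -c copy master.mp4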


  • How can I minimize the amount my router slows down my Internet connection speed?

    - by Lord Torgamus
    Background: I'm working with what I assume is a pretty common Internet setup: a cable modem, a wireless router and a few Internet-connected devices. Lately, I've started being more demanding on my Internet connection, and noticed that using my router slows down my download speeds considerably. I just kind of dealt with it until Zune Marketplace on the Xbox 360 told me that a movie was going to take well over ten hours to download, and I just didn't want to wait that long.

    The test: Good little scientist that I am, I tried to reduce the problem down to one variable. As a control, I turned off all the devices in the house that use wireless Internet, and unplugged all the wired devices except for the Xbox. I also power-cycled both the modem and the router. I then tried to download the movie again, and was told that it would still take over ten hours. Next, I unplugged the router and connected the Xbox directly to the modem. The movie downloaded in just over one hour. As far as I can tell, this means that my ISP, other cable users near me, the remote servers, anything wireless-related and my machines' disk speeds can't be at fault. A similar experiment that replaced the Xbox with a wired laptop produced similar results. To me, this says "the router is responsible for things taking around ten times longer to download."

    My question: I'd still prefer to use the router, for a few reasons:

    - it's a pain to connect and disconnect everything every time there's a big file to download
    - a direct connection to the modem isn't good for security
    - only one machine can be connected directly to the modem at a time

    What can I do to have fast connection speeds while still using the router? I don't mind turning other machines off, as long as I don't have to mess with power and ethernet cables.

    EDIT: After asking this followup question and then this one, I installed DD-WRT on my router, and I seem to be getting higher and more consistent speeds. Perhaps more importantly, my memory use is fairly constant. I know this isn't an answer (which is why I'm not posting it as an answer), but it is how I resolved the situation, and hopefully it'll be helpful for someone.


  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine and we have decided to buy a new server to support our business (our current server is an Asus EEE Box, working great but too limited :) ). I should mention that we are web developers, but occasionally we do small-office sysadmin work. Thus, 99% of the time we work on GNU/Linux (mainly Ubuntu), but from time to time we need to set up a Windows environment to assist some customers (e.g. set up a temporary SQL Server 2008).

    Our requirements:

    - Low budget: we don't want the cheapest solution out there, but we can't afford to spend too much. The budget could be ~1000-1500€ (before VAT).
    - Robustness: we would like to set up a RAID array, and maybe have an external disk where we can store backups.
    - Virtualization: we need to be able to set up a few servers for development. The scenario is something like this (~8 appliances running in parallel): a Redmine + Git server, a Bacula server, an FTP server, and 3-4 virtual appliances that could be set up on demand to test our applications or support a customer (LAMP, Tomcat + PostgreSQL, SQL Server).
    - Support: if something breaks down, it shouldn't be too difficult to find a replacement.

    Now, given the main requirements, there are some doubts we need to clarify:

    1. Do you suggest buying a prepackaged solution (for example a customized Dell PowerEdge T110 or T310) or assembling the server ourselves (buying the separate components)?
    2. What RAID configuration do you suggest? I was thinking of RAID 1 (probably cheaper) or RAID 5. Should we buy a hardware RAID controller, or is it OK to use software RAID (mdadm, sketched below)? If the former, which controller do you suggest?
    3. What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)?
    4. How much RAM? (I was thinking at least 8 GB, ~1 GB per appliance.)
    5. What virtualization software do you recommend? VMware seems to be the best choice, but what about Xen or KVM? We don't want to buy licenses at the moment, so we would like to consider only free options.
    6. What OS do you recommend? We know Ubuntu, Debian and Gentoo very well (we would like to use Ubuntu Server); however, it seems a lot of people go for CentOS.

    Thanks in advance if you can help us with this! It's our first "serious" server, so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day
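    On the software-RAID option in question 2, a hedged mdadm sketch for a two-disk RAID 1; the device names are placeholders for whatever your disks enumerate as:

        # Create a RAID 1 array from two partitions (device names are assumptions)
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        mkfs.ext4 /dev/md0
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist across reboots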


  • Animating the offset of the scrollView in a UICollectionView/UITableView causes prematurely disappearing cells

    - by radutzan
    We have a UICollectionView with a custom layout very similar to UITableView (it scrolls vertically). The UICollectionView displays only 3 cells simultaneously, with one of them being the currently active cell:

        [ 1 ] [*2*] [ 3 ]

    (The active cell here is #2.) The cells are roughly 280 points high, so only the active cell is fully visible on the screen. The user doesn't directly scroll the view to navigate; instead, she swipes the active cell horizontally to advance to the next cell. We then do some fancy animations and scroll the UICollectionView so the next cell is in the "active" position, thus making it the active one, moving the old one away and bringing up the next cell in the queue:

        [ 2 ] [*3*] [ 4 ]

    The problem here is setting the UICollectionView's offset. We currently set it in a UIView animation block (self.collectionView.contentOffset = targetOffset;) along with three other animating properties, which mostly works great, but causes the first cell (the previously active one, #2 in the latter case) to vanish as soon as the animation starts running, even before the delay interval completes. This is definitely not ideal.

    I've thought of some solutions, but can't figure out the best one:

    1. Absurdly enlarge the UICollectionView's frame to fit five cells instead of three, thus forcing it to keep the cells in memory even if they are offscreen. I've tried this and it works, but it sounds like an awfully dirty hack.
    2. Take a snapshot of the rendered content of the vanishing cell, put it in a UIImageView, add the UIImageView as a subview of the scroll view just before the cell goes away, in the exact same position as the old cell, removing it once the animation ends. Sounds less sucky than the previous option (memory-wise, at least), but still kinda hacky. I also don't know the best way to accomplish this; please point me in the right direction (a sketch follows below).
    3. Switch to UIScrollView's setContentOffset:animated:. We actually used to have this, and it fixed the disappearing-cell issue, but running this in parallel with the other UIView animations apparently competes for the attention of the main thread, thus creating a terribly choppy animation on single-core devices (iPhone 3GS/4). It also doesn't allow us to change the duration or easing of the animation, so it feels out of sync with the rest. Still an option if we can find a way to make it work in harmony with the UIView block animations.
    4. Switch to UICollectionView's scrollToItemAtIndexPath:atScrollPosition:animated:. Haven't tried this, but it has a big downside: it only takes 3 possible constants (that apply to this case, at least) for the target scroll position: UICollectionViewScrollPositionTop, UICollectionViewScrollPositionCenteredVertically and UICollectionViewScrollPositionBottom. The active cell can vary its height, but it always has to be 35 points from the top of the window, and these options don't provide enough control to accomplish the design. It could also potentially be just as problematic as option 3. Still an option, because there might be a way around the scroll-position limitation that I don't know of, and it might not have the same issue with the main thread, though that seems unlikely.

    Any help will be greatly appreciated. Please ask if you need clarification. Thanks a lot!
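    For option 2, a minimal Objective-C sketch of snapshotting the outgoing cell with iOS 5/6-era Core Animation rendering; the variable names are illustrative and this is untested:

        // Render the outgoing cell's layer into an image
        UIGraphicsBeginImageContextWithOptions(cell.bounds.size, NO, 0);
        [cell.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Park the snapshot where the cell was; it survives cell reuse
        UIImageView *ghost = [[UIImageView alloc] initWithImage:snapshot];
        ghost.frame = cell.frame;
        [self.collectionView addSubview:ghost];
        // ...run the contentOffset animation, then [ghost removeFromSuperview];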


  • Call Webservice without adding a WebReference - with Complex Types

    - by ck
    I'm using the code at This Site to call a webservice dynamically:

        [SecurityPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public static object CallWebService(string webServiceAsmxUrl, string serviceName, string methodName, object[] args)
        {
            System.Net.WebClient client = new System.Net.WebClient();
            // Connect to the web service
            using (System.IO.Stream stream = client.OpenRead(webServiceAsmxUrl + "?wsdl"))
            {
                // Now read the WSDL file describing the service
                ServiceDescription description = ServiceDescription.Read(stream);

                // LOAD THE DOM
                // Initialize a service description importer
                ServiceDescriptionImporter importer = new ServiceDescriptionImporter();
                importer.ProtocolName = "Soap12"; // Use SOAP 1.2
                importer.AddServiceDescription(description, null, null);

                // Generate a proxy client
                importer.Style = ServiceDescriptionImportStyle.Client;
                // Generate properties to represent primitive values
                importer.CodeGenerationOptions = System.Xml.Serialization.CodeGenerationOptions.GenerateProperties;

                // Initialize a Code-DOM tree into which we will import the service
                CodeNamespace nmspace = new CodeNamespace();
                CodeCompileUnit unit1 = new CodeCompileUnit();
                unit1.Namespaces.Add(nmspace);

                // Import the service into the Code-DOM tree.
                // This creates proxy code that uses the service.
                ServiceDescriptionImportWarnings warning = importer.Import(nmspace, unit1);
                if (warning == 0) // If zero then we are good to go
                {
                    // Generate the proxy code
                    CodeDomProvider provider1 = CodeDomProvider.CreateProvider("CSharp");

                    // Compile the assembly proxy with the appropriate references
                    string[] assemblyReferences = new string[5] { "System.dll", "System.Web.Services.dll", "System.Web.dll", "System.Xml.dll", "System.Data.dll" };
                    CompilerParameters parms = new CompilerParameters(assemblyReferences);
                    CompilerResults results = provider1.CompileAssemblyFromDom(parms, unit1);

                    // Check for errors
                    if (results.Errors.Count > 0)
                    {
                        StringBuilder sb = new StringBuilder();
                        foreach (CompilerError oops in results.Errors)
                        {
                            sb.AppendLine("========Compiler error============");
                            sb.AppendLine(oops.ErrorText);
                        }
                        throw new System.ApplicationException("Compile Error Occured calling webservice. " + sb.ToString());
                    }

                    // Finally, invoke the web service method
                    Type foundType = null;
                    Type[] types = results.CompiledAssembly.GetTypes();
                    foreach (Type type in types)
                    {
                        if (type.BaseType == typeof(System.Web.Services.Protocols.SoapHttpClientProtocol))
                        {
                            Console.WriteLine(type.ToString());
                            foundType = type;
                        }
                    }
                    object wsvcClass = results.CompiledAssembly.CreateInstance(foundType.ToString());
                    MethodInfo mi = wsvcClass.GetType().GetMethod(methodName);
                    return mi.Invoke(wsvcClass, args);
                }
                else
                {
                    return null;
                }
            }
        }

    This works fine when I use built-in types, but for my own classes I get this:

        Event Type: Error
        Event Source: TDX Queue Service
        Event Category: None
        Event ID: 0
        Date: 12/04/2010
        Time: 12:12:38
        User: N/A
        Computer: TDXRMISDEV01
        Description: System.ArgumentException: Object of type 'TDXDataTypes.AgencyOutput' cannot be converted to type 'AgencyOutput'.

        Server stack trace:
            at System.RuntimeType.CheckValue(Object value, Binder binder, CultureInfo culture, BindingFlags invokeAttr)
            at System.Reflection.MethodBase.CheckArguments(Object[] parameters, Binder binder, BindingFlags invokeAttr, CultureInfo culture, Signature sig)
            at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
            at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
            at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters)
            at TDXQueueEngine.GenericWebserviceProxy.CallWebService(String webServiceAsmxUrl, String serviceName, String methodName, Object[] args) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\GenericWebserviceProxy.cs:line 76
            at TDXQueueEngine.TDXQueueWebserviceItem.Run() in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueueWebserviceItem.cs:line 99
            at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
            at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
            at System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink)

        Exception rethrown at [0]:
            at System.Runtime.Remoting.Proxies.RealProxy.EndInvokeHelper(Message reqMsg, Boolean bProxyCase)
            at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(Object NotUsed, MessageData& msgData)
            at TDXQueueEngine.TDXQueue.RunProcess.EndInvoke(IAsyncResult result)
            at TDXQueueEngine.TDXQueue.processComplete(IAsyncResult ar) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueue.cs:line 130

        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    The classes reference the same assembly and the same version. Do I need to include my assembly as a reference when building the temporary assembly? If so, how? Thanks.
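    For what it's worth, a hedged guess at a first thing to try: the dynamically compiled proxy generates its own 'AgencyOutput' class from the WSDL, so the runtime refuses your TDXDataTypes.AgencyOutput instance even though the shapes match. Referencing the shared-types assembly when compiling the proxy is one avenue (illustrative only; it may also require making the generated code reuse those types rather than its own):

        // Hypothetical fix, not verified: add the shared-types assembly to the
        // proxy compilation, resolving its path from the already-loaded assembly
        string[] assemblyReferences = new string[] {
            "System.dll", "System.Web.Services.dll", "System.Web.dll",
            "System.Xml.dll", "System.Data.dll",
            typeof(TDXDataTypes.AgencyOutput).Assembly.Location
        };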


  • JSF: Cannot catch ViewExpiredException

    - by ifischer
    I'm developing a JSF 2.0 application on GlassFish v3 and I'm trying to handle the ViewExpiredException. But whatever I do, I always get a GlassFish error report instead of my own error page.

    To simulate the occurrence of the VEE, I inserted the following method into my backing bean, which throws the VEE. I'm triggering this method from my JSF page through a commandLink. The code:

        @Named
        public class PersonHome {
            (...)
            public void throwVEE() {
                throw new ViewExpiredException();
            }
        }

    At first I tried simply adding an error-page to my web.xml:

        <error-page>
            <exception-type>javax.faces.application.ViewExpiredException</exception-type>
            <location>/error.xhtml</location>
        </error-page>

    But this doesn't work: I'm not redirected to the error page, but shown the GlassFish error page, an HTTP Status 500 page with the following content:

        description: The server encountered an internal error () that prevented it from fulfilling this request.
        exception: javax.servlet.ServletException: javax.faces.application.ViewExpiredException
        root cause: javax.faces.el.EvaluationException: javax.faces.application.ViewExpiredException
        root cause: javax.faces.application.ViewExpiredException

    The next thing I tried was to write an ExceptionHandlerFactory and a CustomExceptionHandler, as described in JavaServer Faces 2.0 - The Complete Reference. So I inserted the following tag into faces-config.xml:

        <factory>
            <exception-handler-factory>
                exceptions.ExceptionHandlerFactory
            </exception-handler-factory>
        </factory>

    And added these classes. The factory:

        package exceptions;

        import javax.faces.context.ExceptionHandler;

        public class ExceptionHandlerFactory extends javax.faces.context.ExceptionHandlerFactory {

            private javax.faces.context.ExceptionHandlerFactory parent;

            public ExceptionHandlerFactory(javax.faces.context.ExceptionHandlerFactory parent) {
                this.parent = parent;
            }

            @Override
            public ExceptionHandler getExceptionHandler() {
                ExceptionHandler result = parent.getExceptionHandler();
                result = new CustomExceptionHandler(result);
                return result;
            }
        }

    The custom exception handler:

        package exceptions;

        import java.util.Iterator;
        import javax.faces.FacesException;
        import javax.faces.application.NavigationHandler;
        import javax.faces.application.ViewExpiredException;
        import javax.faces.context.ExceptionHandler;
        import javax.faces.context.ExceptionHandlerWrapper;
        import javax.faces.context.FacesContext;
        import javax.faces.event.ExceptionQueuedEvent;
        import javax.faces.event.ExceptionQueuedEventContext;

        class CustomExceptionHandler extends ExceptionHandlerWrapper {

            private ExceptionHandler parent;

            public CustomExceptionHandler(ExceptionHandler parent) {
                this.parent = parent;
            }

            @Override
            public ExceptionHandler getWrapped() {
                return this.parent;
            }

            @Override
            public void handle() throws FacesException {
                for (Iterator<ExceptionQueuedEvent> i = getUnhandledExceptionQueuedEvents().iterator(); i.hasNext();) {
                    ExceptionQueuedEvent event = i.next();
                    System.out.println("Iterating over ExceptionQueuedEvents. Current: " + event.toString());
                    ExceptionQueuedEventContext context = (ExceptionQueuedEventContext) event.getSource();
                    Throwable t = context.getException();
                    if (t instanceof ViewExpiredException) {
                        ViewExpiredException vee = (ViewExpiredException) t;
                        FacesContext fc = FacesContext.getCurrentInstance();
                        NavigationHandler nav = fc.getApplication().getNavigationHandler();
                        try {
                            // Push some useful stuff to the flash scope for use in the page
                            fc.getExternalContext().getFlash().put("expiredViewId", vee.getViewId());
                            nav.handleNavigation(fc, null, "/login?faces-redirect=true");
                            fc.renderResponse();
                        } finally {
                            i.remove();
                        }
                    }
                }
                // At this point, the queue will not contain any ViewExpiredEvents.
                // Therefore, let the parent handle them.
                getWrapped().handle();
            }
        }

    But STILL I'm not redirected to my error page; I get the same HTTP 500 error as above. What am I doing wrong? What could be missing in my implementation so that the exception is handled correctly?


  • Can't get Heroku to work with Rails 3.x and PostgreSQL

    - by framomo86
    I followed the official Heroku guides to push a Rails project to Heroku. The application.rb file is OK, and I added the pg gem and database.yml in the right way. When I push to Heroku I get:

        -----> Preparing app for Rails asset pipeline
               Detected manifest.yml, assuming assets were compiled locally

    But when I open the app via heroku open I get an error. I ran heroku logs and got this:

        Started GET "/" for 93.45.227.255 at 2012-10-11 13:28:04 +0000
        2012-10-11T13:28:04+00:00 app[web.1]: Processing by ProductsController#index as HTML
        2012-10-11T13:28:04+00:00 app[web.1]: : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull
        2012-10-11T13:28:04+00:00 app[web.1]: ActiveRecord::StatementInvalid (PG::Error: ERROR: relation "products" does not exist
        2012-10-11T13:28:04+00:00 app[web.1]: LINE 4: WHERE a.attrelid = '"products"'::regclass
        2012-10-11T13:28:04+00:00 heroku[router]: GET gift4.herokuapp.com/ dyno=web.1 queue=0 wait=0ms service=203ms status=500 bytes=643
        2012-10-11T13:28:04+00:00 app[web.1]: ):
        2012-10-11T13:28:04+00:00 app[web.1]: ON a.attrelid = d.adrelid AND a.attnum = d.adnum
        2012-10-11T13:28:04+00:00 app[web.1]:
        2012-10-11T13:28:04+00:00 app[web.1]:
        2012-10-11T13:28:04+00:00 app[web.1]: ^
        2012-10-11T13:28:04+00:00 app[web.1]:
        2012-10-11T13:28:04+00:00 app[web.1]: Completed 500 Internal Server Error in 72ms
        2012-10-11T13:28:04+00:00 app[web.1]: FROM pg_attribute a LEFT JOIN pg_attrdef d
        2012-10-11T13:28:04+00:00 app[web.1]: WHERE a.attrelid = '"products"'::regclass
        2012-10-11T13:28:04+00:00 app[web.1]: AND a.attnum > 0 AND NOT a.attisdropped
        2012-10-11T13:28:04+00:00 app[web.1]: ORDER BY a.attnum
        2012-10-11T13:28:04+00:00 app[web.1]: app/controllers/products_controller.rb:5:in `index'
        2012-10-11T13:28:04+00:00 heroku[router]: GET gift4.herokuapp.com/favicon.ico d

    So I tried heroku run rake db:reset and got this:

        !    Heroku client internal error.
        !    Search for help at: https://help.heroku.com
        !    Or report a bug at: https://github.com/heroku/heroku/issues/new

        Error:       Operation timed out - connect(2) (Errno::ETIMEDOUT)
        Backtrace:   /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/client/rendezvous.rb:39:in `initialize'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/client/rendezvous.rb:39:in `open'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/client/rendezvous.rb:39:in `block in start'
                     /Users/francescochecco/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/timeout.rb:68:in `timeout'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/client/rendezvous.rb:31:in `start'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/command/run.rb:125:in `rendezvous_session'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/command/run.rb:112:in `run_attached'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/command/run.rb:21:in `index'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/command.rb:206:in `run'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/lib/heroku/cli.rb:28:in `start'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/gems/heroku-2.32.6/bin/heroku:16:in `<top (required)>'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/bin/heroku:19:in `load'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/bin/heroku:19:in `<main>'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `eval'
                     /Users/francescochecco/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `<main>'
        Command:     heroku run rake db:reset
        Version:     heroku-gem/2.32.6 (x86_64-darwin11.3.0) ruby/1.9.3 autoupdate

    I've tried everything. Can anyone help?
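    For context, a hedged note: a PG::Error saying relation "products" does not exist usually means the migrations never ran against Heroku's database, and the standard sequence from Heroku's documented workflow is:

        heroku run rake db:migrate   # build the schema on Heroku's database
        heroku restart

    The later Errno::ETIMEDOUT is a separate issue: heroku run opens a rendezvous connection on outbound port 5000, which some firewalls and office networks block.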


  • JMS ConnectionFactory creation error WSVR0073W

    - by scottyab
    I must confess I'm not a JMS aficionado. One of our guys has written a Java webservice client [postcode lookup web service], and from a remote Java client we are calling a message-driven bean running in WebSphere 6.1, using JMS. We get the following error when attempting to create the connection factory, which is configured within WebSphere as jms/WSProxyQueueConnectionFactory: WARNING: WSVR0073W. Googling WSVR0073W yields little; the error code is an unknown error. Can anyone shed any light on potential issues creating the connection factory?

    Code:

        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, contextFactoryName);
        env.put(Context.PROVIDER_URL, providerURL);
        env.put("com.ibm.CORBA.ORBInit", "com.ibm.ws.sib.client.ORB");
        namingContext = new InitialContext(env);
        System.out.println("callRemoteService: get connectionFactoriy, request/response queues, session. Naming contex env =" + env);

        // Find everything we need to communicate...
        connectionFactory = (QueueConnectionFactory) namingContext.lookup(getQueueConnectionFactoryName());
        requestQueue = (Queue) namingContext.lookup(getRequestQueueName());

    Console output:

        calling RemoteService with hostname[MyServer:2813] and postcode[M4E 3W1]
        callRemoteService hostname[MyServer:2813] messess text[M4E 3W1]
        callRemoteService: get connectionFactoriy, request/response queues, session. Naming contex env ={com.ibm.CORBA.ORBInit=com.ibm.ws.sib.client.ORB, java.naming.provider.url=iiop://MyServer:2813/, java.naming.factory.initial=com.ibm.websphere.naming.WsnInitialContextFactory}
        05-Jan-2011 13:51:04 null null WARNING: WSVR0073W
        05-Jan-2011 13:51:05 null null WARNING: jndiGetObjInstErr
        05-Jan-2011 13:51:05 null null WARNING: jndiNamingException
        callRemoteService: closing connections and resources
        com.ibm.websphere.naming.CannotInstantiateObjectException: Exception occurred while the JNDI NamingManager was processing a javax.naming.Reference object.
        [Root exception is java.lang.NoClassDefFoundError: Invalid Implementation Key, com.ibm.ws.transaction.NonRecovWSTxManager]
            at com.ibm.ws.naming.util.Helpers.processSerializedObjectForLookupExt(Helpers.java:1000)
            at com.ibm.ws.naming.util.Helpers.processSerializedObjectForLookup(Helpers.java:705)
            at com.ibm.ws.naming.jndicos.CNContextImpl.processResolveResults(CNContextImpl.java:2097)
            at com.ibm.ws.naming.jndicos.CNContextImpl.doLookup(CNContextImpl.java:1951)
            at com.ibm.ws.naming.jndicos.CNContextImpl.doLookup(CNContextImpl.java:1866)
            at com.ibm.ws.naming.jndicos.CNContextImpl.lookupExt(CNContextImpl.java:1556)
            at com.ibm.ws.naming.jndicos.CNContextImpl.lookup(CNContextImpl.java:1358)
            at com.ibm.ws.naming.util.WsnInitCtx.lookup(WsnInitCtx.java:172)
            at javax.naming.InitialContext.lookup(InitialContext.java:450)
            at com.das.jms.clients.BaseWSProxyClient.callRemoteService(BaseWSProxyClient.java:180)
            at com.das.jms.clients.RemotePostCodeLookup.findAddress(RemotePostCodeLookup.java:38)
            at com.das.jms.RemoteServiceAccess.findAddress(RemoteServiceAccess.java:80)
            at com.das.jms.TestRemoteAccess.testSuccessLookup(TestRemoteAccess.java:20)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:599)
            at junit.framework.TestCase.runTest(TestCase.java:168)
            at junit.framework.TestCase.runBare(TestCase.java:134)
            at junit.framework.TestResult$1.protect(TestResult.java:110)
            at junit.framework.TestResult.runProtected(TestResult.java:128)
            at junit.framework.TestResult.run(TestResult.java:113)
            at junit.framework.TestCase.run(TestCase.java:124)
            at junit.framework.TestSuite.runTest(TestSuite.java:232)
            at junit.framework.TestSuite.run(TestSuite.java:227)
            at org.junit.internal.runners.OldTestClassRunner.run(OldTestClassRunner.java:76)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
        com.ibm.websphere.naming.CannotInstantiateObjectException: Exception occurred while the JNDI NamingManager was processing a javax.naming.Reference object.
        [Root exception is java.lang.NoClassDefFoundError: Invalid Implementation Key, com.ibm.ws.transaction.NonRecovWSTxManager] [[B@4d794d79
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
        Caused by: java.lang.NoClassDefFoundError: Invalid Implementation Key, com.ibm.ws.transaction.NonRecovWSTxManager
            at com.ibm.ws.Transaction.TransactionManagerFactory.getUOWCurrent(TransactionManagerFactory.java:125)
            at com.ibm.ws.rsadapter.AdapterUtil.<clinit>(AdapterUtil.java:271)
            at java.lang.J9VMInternals.initializeImpl(Native Method)
            at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
            at com.ibm.ejs.j2c.ConnectionFactoryBuilderImpl.getObjectInstance(ConnectionFactoryBuilderImpl.java:281)
            at javax.naming.spi.NamingManager.getObjectInstanceByFactoryInReference(NamingManager.java:480)
            at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:345)
            at com.ibm.ws.naming.util.Helpers.processSerializedObjectForLookupExt(Helpers.java:896)
            ... 31 more


  • HTTP error code 405: Tomcat URL mapping issue

    - by Andrew
    I am having trouble POSTing to my Java HttpServlet. I am getting "HTTP Status 405 - HTTP method GET is not supported by this URL" from my Tomcat server. When I debug the servlet, the login method is never called. I think it's a URL-mapping issue within Tomcat.

    web.xml:

        <servlet-mapping>
            <servlet-name>faxcom</servlet-name>
            <url-pattern>/faxcom/*</url-pattern>
        </servlet-mapping>

    FaxcomService.java:

        @Path("/rest")
        public class FaxcomService extends HttpServlet {

            private FAXCOM_x0020_ServiceLocator service;
            private FAXCOM_x0020_ServiceSoap port;

            @GET
            @Produces("application/json")
            public String testGet() {
                return "{ \"got here\":true }";
            }

            @POST
            @Path("/login")
            @Consumes("application/json")
            // @Produces("application/json")
            public Response login(LoginBean login) {
                ArrayList<ResultMessageBean> rm = new ArrayList<ResultMessageBean>(10);
                try {
                    service = new FAXCOM_x0020_ServiceLocator();
                    service.setFAXCOM_x0020_ServiceSoapEndpointAddress("http://cd-faxserver/faxcom_ws/faxcomservice.asmx");
                    service.setMaintainSession(true); // enable sessions support
                    port = service.getFAXCOM_x0020_ServiceSoap();
                    rm.add(new ResultMessageBean(port.logOn(
                            "\\\\CD-Faxserver\\FaxcomQ_API", /* path to the queue */
                            login.getUserName(),             /* username */
                            login.getPassword(),             /* password */
                            login.getUserType()              /* 2 = user conf user */
                    )));
                } catch (RemoteException e) {
                    e.printStackTrace();
                } catch (ServiceException e) {
                    e.printStackTrace();
                }
                // return rm;
                return Response.status(201).entity(rm).build();
            }

            @POST
            @Path("/newFaxMessage")
            @Consumes(MediaType.APPLICATION_JSON)
            @Produces(MediaType.APPLICATION_JSON)
            public ArrayList<ResultMessageBean> newFaxMessage(FaxBean fax) {
                ArrayList<ResultMessageBean> rm = new ArrayList<ResultMessageBean>();
                try {
                    rm.add(new ResultMessageBean(port.newFaxMessage(
                            fax.getPriority(),    /* priority: 0 - low, 1 - normal, 2 - high, 3 - urgent */
                            fax.getSendTime(),    /* send time: "0.0" - immediate, "1.0" - offpeak,
                                                     "9/14/2007 5:12:11 PM" - a specific time */
                            fax.getResolution(),  /* resolution: 0 - low res, 1 - high res */
                            fax.getSubject(),     /* subject */
                            fax.getCoverpage(),   /* cover page: "" - default, "(none)" - no cover page */
                            fax.getMemo(),        /* memo */
                            fax.getSenderName(),          /* sender's name */
                            fax.getSenderFaxNumber(),     /* sender's fax */
                            fax.getRecipients().get(0).getName(),          /* recipient's name */
                            fax.getRecipients().get(0).getCompany(),       /* recipient's company */
                            fax.getRecipients().get(0).getFaxNumber(),     /* destination fax number */
                            fax.getRecipients().get(0).getVoiceNumber(),   /* recipient's phone number */
                            fax.getRecipients().get(0).getAccountNumber()  /* recipient's account number */
                    )));
                    if (fax.getRecipients().size() > 1) {
                        for (int i = 1; i < fax.getRecipients().size(); i++)
                            rm.addAll(addRecipient(fax.getRecipients().get(i)));
                    }
                } catch (RemoteException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                return rm;
            }
        }

    Main.java:

        private static void main(String[] args) {
            try {
                URL url = new URL("https://andrew-vm/faxcom/rest/login");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/json");

                FileInputStream jsonDemo = new FileInputStream("login.txt");
                OutputStream os = (OutputStream) conn.getOutputStream();
                os.write(IOUtils.toByteArray(jsonDemo));
                os.flush();

                if (conn.getResponseCode() != 200) {
                    throw new RuntimeException("Failed : HTTP error code : " + conn.getResponseCode());
                }

                BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                String output;
                System.out.println("Output from Server .... \n");
                while ((output = br.readLine()) != null) {
                    System.out.println(output);
                }
                // Don't want to disconnect - servletInstance will be destroyed
                // conn.disconnect();
            } catch (MalformedURLException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

    I am working from this tutorial: http://www.mkyong.com/webservices/jax-rs/restfull-java-client-with-java-net-url/

  • My VARCHAR(MAX) field is capping itself at 4000; what gives?

    - by eidylon
    Hello all. I have a table in one of my databases which is a queue of emails. Emails to certain addresses get accumulated into one email, which is done by a sproc. In the sproc, I have a table variable which I use to build the accumulated bodies of the emails, and then loop through to send each email. In my table variable I have my body column defined as VARCHAR(MAX), seeing as there could be any number of emails currently accumulated for a given email address.

    It seems, though, that even though my column is defined as VARCHAR(MAX), it is behaving as if it were VARCHAR(4000) and is truncating the data going into it. It does NOT throw any exceptions; it just silently stops concatenating any more data after 4000 characters. The MERGE statement is where it builds the accumulated email body into @EMAILS.BODY, which is the field that is truncating itself at 4000 characters. Below is the code of my sproc:

        ALTER PROCEDURE [system].[SendAccumulatedEmails]
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @SENTS BIGINT = 0;

            DECLARE @ROWS TABLE (
                ROWID ROWID,
                DATED DATETIME,
                ADDRESS NAME,
                SUBJECT VARCHAR(1000),
                BODY VARCHAR(MAX)
            )

            INSERT INTO @ROWS
            SELECT ROWID, DATED, ADDRESS, SUBJECT, BODY
            FROM system.EMAILQUEUE
            WHERE ACCUMULATE = 1 AND SENT IS NULL
            ORDER BY ADDRESS, DATED

            DECLARE @EMAILS TABLE (
                ADDRESS NAME,
                ALLIDS VARCHAR(1000),
                BODY VARCHAR(MAX)
            )

            DECLARE @PRVRID ROWID = NULL, @CURRID ROWID = NULL
            SELECT @CURRID = MIN(ROWID) FROM @ROWS
            WHILE @CURRID IS NOT NULL
            BEGIN
                MERGE @EMAILS AS DST
                USING (SELECT * FROM @ROWS WHERE ROWID = @CURRID) AS SRC
                   ON SRC.ADDRESS = DST.ADDRESS
                WHEN MATCHED THEN UPDATE SET
                    DST.ALLIDS = DST.ALLIDS + ', ' + CONVERT(VARCHAR, ROWID),
                    DST.BODY = DST.BODY + '<i>' + CONVERT(VARCHAR, SRC.DATED, 101) + ' '
                        + CONVERT(VARCHAR, SRC.DATED, 8) + ':</i> <b>' + SRC.SUBJECT + '</b>'
                        + CHAR(13) + SRC.BODY
                        + ' (Message ID ' + CONVERT(VARCHAR, SRC.ROWID) + ')'
                        + CHAR(13) + CHAR(13)
                WHEN NOT MATCHED BY TARGET THEN
                    INSERT (ADDRESS, ALLIDS, BODY)
                    VALUES (
                        SRC.ADDRESS,
                        CONVERT(VARCHAR, ROWID),
                        '<i>' + CONVERT(VARCHAR, SRC.DATED, 101) + ' '
                            + CONVERT(VARCHAR, SRC.DATED, 8) + ':</i> <b>' + SRC.SUBJECT + '</b>'
                            + CHAR(13) + SRC.BODY
                            + ' (Message ID ' + CONVERT(VARCHAR, SRC.ROWID) + ')'
                            + CHAR(13) + CHAR(13)
                    );

                SELECT @PRVRID = @CURRID, @CURRID = NULL
                SELECT @CURRID = MIN(ROWID) FROM @ROWS WHERE ROWID > @PRVRID
            END

            DECLARE @MAILFROM VARCHAR(100) = system.getOption('MAILFROM')
            DECLARE @SMTPHST VARCHAR(100) = system.getOption('SMTPSERVER')
            DECLARE @SMTPUSR VARCHAR(100) = system.getOption('SMTPUSER')
            DECLARE @SMTPPWD VARCHAR(100) = system.getOption('SMTPPASS')

            DECLARE @ADDRESS NAME, @BODY VARCHAR(MAX), @ADDL VARCHAR(MAX)
            DECLARE @SUBJECT VARCHAR(1000) = 'Accumulated Emails from LIJSL'

            DECLARE @PRVID NAME = NULL, @CURID NAME = NULL
            SELECT @CURID = MIN(ADDRESS) FROM @EMAILS
            WHILE @CURID IS NOT NULL
            BEGIN
                SELECT @ADDRESS = ADDRESS, @BODY = BODY FROM @EMAILS WHERE ADDRESS = @CURID

                SELECT @BODY = @BODY + 'This is an automated message sent from an unmonitored mailbox.'
                    + CHAR(13) + 'Do not reply to this message; your message will not be read.'
                SELECT @BODY = '<style type="text/css">
                    * {font-family: Tahoma, Arial, Verdana;}
                    p {margin-top: 10px; padding-top: 10px; border-top: single 1px dimgray;}
                    p:first-child {margin-top: 10px; padding-top: 0px; border-top: none 0px transparent;}
                    </style>' + @BODY

                exec system.LogIt @SUBJECT, @BODY

                BEGIN TRY
                    exec system.SendMail @SMTPHST, @SMTPUSR, @SMTPPWD, @MAILFROM, @ADDRESS, NULL, NULL, @SUBJECT, @BODY, 1
                END TRY
                BEGIN CATCH
                    DECLARE @EMSG NVARCHAR(2048) = 'system.EMAILQUEUE.AI:' + ERROR_MESSAGE()
                    SELECT @ADDL = 'TO:' + @ADDRESS + CHAR(13) + 'SUBJECT:' + @SUBJECT + CHAR(13) + 'BODY:' + @BODY
                    exec system.LogIt @EMSG, @ADDL
                END CATCH

                SELECT @PRVID = @CURID, @CURID = NULL
                SELECT @CURID = MIN(ADDRESS) FROM @EMAILS WHERE ADDRESS > @PRVID
            END

            UPDATE system.EMAILQUEUE SET SENT = getdate()
            FROM system.EMAILQUEUE E, @ROWS R
            WHERE E.ROWID = R.ROWID
        END
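    A hedged note on the symptom itself: in T-SQL, the type of a concatenation is derived from its operands, so non-MAX VARCHAR + VARCHAR is capped at 8000 characters (4000 for NVARCHAR, which the 4000 figure in the title hints is involved somewhere), regardless of the VARCHAR(MAX) column it is assigned to. A small illustration:

        -- Truncates: both operands are non-MAX, so the result is capped at 8000
        DECLARE @s VARCHAR(MAX) = REPLICATE('x', 8000) + REPLICATE('x', 8000);
        SELECT LEN(@s);   -- 8000

        -- Grows past the cap: one operand is VARCHAR(MAX)
        DECLARE @t VARCHAR(MAX) = REPLICATE(CAST('x' AS VARCHAR(MAX)), 8000) + REPLICATE('x', 8000);
        SELECT LEN(@t);   -- 16000

    Casting the first operand of each BODY concatenation to VARCHAR(MAX) is the usual workaround.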


  • Quartz Thread Execution Parallel or Sequential?

    - by vikas
    We have a Quartz-based scheduler application which runs about 1,000 jobs per minute, evenly distributed across the seconds of each minute, i.e. about 16-17 jobs per second. Ideally, these 16-17 jobs should fire at the same time; however, the first statement of our job's execute method, which simply logs the time of execution, is being called very late. For example, assume we have 1,000 jobs scheduled per minute from 05:00 to 05:04. The job scheduled at 05:03:50 should log the first statement of its execute method at 05:03:50, but it does so at about 05:06:38.

    I have tracked down the time taken by the scheduled job, which comes to around 15-20 milliseconds. The job is fast enough, because we just send a message on an ActiveMQ queue. We have set the number of Quartz threads to 100 and have even tried increasing it to 200 and more, but with no gain.

    One more thing we noticed is that the logs from the scheduler become sequential after the first minute, i.e.:

        [Quartz_Worker_28] <Some log statement>
        [Quartz_Worker_29] <Some log statement>
        [Quartz_Worker_30] <Some log statement>

    This suggests that after some time Quartz is running its threads almost sequentially. Maybe this is happening due to the time taken to notify the persistence store (a separate PostgreSQL database in this case) of job completion, and/or context switching. What can be the reason behind this strange behavior?

    EDIT: a more detailed log:

        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] fired job [<job_name>] scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute begin--------- ScheduledLocateJob with key: <job_name> started at Fri Jul 06 10:08:37 EDT 2012
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute end--------- ScheduledLocateJob with key: <job_name> ended at Fri Jul 06 10:08:37 EDT 2012
        [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] completed firing job [<job_name>] with resulting trigger instruction code: DO NOTHING. Next scheduled at: 06-07-2012 10:34:53.000

    I am doubting this section of the above log:

        scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000

    because this job was scheduled for 10:04:53 but fired at 10:08:33, and Quartz still didn't consider it a misfire. Shouldn't it be a misfire?
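    For reference, the knobs usually involved in both symptoms live in quartz.properties; a hedged sketch with illustrative values only:

        # Size of the worker pool (the 100 threads mentioned above)
        org.quartz.threadPool.threadCount = 100

        # How late (in ms) a trigger may fire before Quartz treats it as a misfire.
        # With the 60000 default, a trigger firing ~3.5 minutes late would normally
        # be flagged, so it is worth checking whether this value has been raised.
        org.quartz.jobStore.misfireThreshold = 60000

        # A JDBC job store adds a database round trip per fire;
        # RAMJobStore avoids that latency if persistence isn't required.
        org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX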

    Read the article

  • 1>main.obj : error LNK2001: unresolved external symbol _D3D10CreateDeviceAndSwapChain@32

    - by numerical25
    I'm having trouble getting my DirectX setup going. I get the following linker error:

        1>Linking...
        1>main.obj : error LNK2001: unresolved external symbol _D3D10CreateDeviceAndSwapChain@32
        1>C:\Users\numerical25\Desktop\Intro ToDirectX\msdnTutorials\tutorial0\tutorial\Debug\tutorial.exe : fatal error LNK1120: 1 unresolved externals

    Below is my code:

        // include the basic windows header file
        #include <windows.h>
        #include <windowsx.h>
        #include <d3d10.h>

        ID3D10Device* g_pd3dDevice;
        IDXGISwapChain* g_pSwapChain;

        // the WindowProc function prototype
        LRESULT CALLBACK WindowProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);
        bool InitDirect3D(HWND);

        // the entry point for any Windows program
        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
        {
            // the handle for the window, filled by a function
            HWND hWnd;
            // this struct holds information for the window class
            WNDCLASSEX wc;

            // clear out the window class for use
            ZeroMemory(&wc, sizeof(WNDCLASSEX));

            // fill in the struct with the needed information
            wc.cbSize = sizeof(WNDCLASSEX);
            wc.style = CS_HREDRAW | CS_VREDRAW;
            wc.lpfnWndProc = WindowProc;
            wc.hInstance = hInstance;
            wc.hCursor = LoadCursor(NULL, IDC_ARROW);
            wc.hbrBackground = (HBRUSH)COLOR_WINDOW;
            wc.lpszClassName = L"WindowClass1";

            // register the window class
            RegisterClassEx(&wc);

            // create the window and use the result as the handle
            hWnd = CreateWindowEx(NULL,
                L"WindowClass1",                // name of the window class
                L"Our First Windowed Program",  // title of the window
                WS_OVERLAPPEDWINDOW,            // window style
                300,                            // x-position of the window
                300,                            // y-position of the window
                640,                            // width of the window
                480,                            // height of the window
                NULL,                           // we have no parent window, NULL
                NULL,                           // we aren't using menus, NULL
                hInstance,                      // application handle
                NULL);                          // used with multiple windows, NULL

            // display the window on the screen
            ShowWindow(hWnd, nCmdShow);

            // enter the main loop:
            // this struct holds Windows event messages
            MSG msg;
            bool finished = InitDirect3D(hWnd);

            // Enter the infinite message loop
            while (TRUE)
            {
                // Check to see if any messages are waiting in the queue
                while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
                {
                    // Translate the message and dispatch it to WindowProc()
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
                // If the message is WM_QUIT, exit the while loop
                if (msg.message == WM_QUIT)
                    break;
                // Run game code here
                // ...
            };
        }

        // this is the main message handler for the program
        LRESULT CALLBACK WindowProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
        {
            // sort through and find what code to run for the message given
            switch (message)
            {
                // this message is read when the window is closed
                case WM_DESTROY:
                {
                    // close the application entirely
                    PostQuitMessage(0);
                    return 0;
                }
                break;
            }
            // Handle any messages the switch statement didn't
            return DefWindowProc(hWnd, message, wParam, lParam);
        }

        bool InitDirect3D(HWND g_hWnd)
        {
            DXGI_SWAP_CHAIN_DESC sd;
            ZeroMemory(&sd, sizeof(sd));
            sd.BufferCount = 1;
            sd.BufferDesc.Width = 640;
            sd.BufferDesc.Height = 480;
            sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            sd.BufferDesc.RefreshRate.Numerator = 60;
            sd.BufferDesc.RefreshRate.Denominator = 1;
            sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            sd.OutputWindow = g_hWnd;
            sd.SampleDesc.Count = 1;
            sd.SampleDesc.Quality = 0;
            sd.Windowed = TRUE;

            if (FAILED(D3D10CreateDeviceAndSwapChain(NULL, D3D10_DRIVER_TYPE_REFERENCE,
                    NULL, 0, D3D10_SDK_VERSION, &sd, &g_pSwapChain, &g_pd3dDevice)))
            {
                return FALSE;
            }
            return TRUE;
        }
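    For what it's worth, this particular unresolved external usually means the D3D10 import library isn't being passed to the linker. A minimal fix sketch, assuming MSVC and the DirectX SDK (the project-settings equivalent is Linker > Input > Additional Dependencies):

        // Ask the MSVC linker to pull in d3d10.lib, which exports
        // D3D10CreateDeviceAndSwapChain.
        #pragma comment(lib, "d3d10.lib")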

    Read the article

  • Hard crash when drawing content for CALayer using quartz

    - by Lukasz
    I am trying to figure out why iOS crashes my application in the harsh way (no crash logs, immediate shutdown, black screen of death with a spinner shown for a while). It happens when I render content for a CALayer using Quartz. I suspected a memory issue (it happens only when testing on the device), but the memory logs, as well as the Instruments allocation logs, look quite OK. Let me paste in the fatal function:

        - (void)renderTiles {
            if (rendering) {
                //NSLog(@"====== RENDERING TILES SKIP =======");
                return;
            }
            rendering = YES;
            CGRect b = tileLayer.bounds;
            CGSize s = b.size;
            CGFloat imageScale = [[UIScreen mainScreen] scale];
            s.height *= imageScale;
            s.width *= imageScale;
            dispatch_async(queue, ^{
                NSLog(@"");
                NSLog(@"====== RENDERING TILES START =======");
                NSLog(@"1. Before creating context");
                report_memory();
                CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
                NSLog(@"2. After creating color space");
                report_memory();
                NSLog(@"3. About to create context with size: %@", NSStringFromCGSize(s));
                CGContextRef ctx = CGBitmapContextCreate(NULL, s.width, s.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
                NSLog(@"4. After creating context");
                report_memory();
                CGAffineTransform flipTransform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, s.height);
                CGContextConcatCTM(ctx, flipTransform);
                CGRect tileRect = CGRectMake(0, 0, tileImageScaledSize.width, tileImageScaledSize.height);
                CGContextDrawTiledImage(ctx, tileRect, tileCGImageScaled);
                NSLog(@"5. Before creating cgimage from context");
                report_memory();
                CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
                NSLog(@"6. After creating cgimage from context");
                report_memory();
                dispatch_sync(dispatch_get_main_queue(), ^{
                    tileLayer.contents = (id)cgImage;
                });
                NSLog(@"7. After asgning tile layer contents = cgimage");
                report_memory();
                CGColorSpaceRelease(colorSpace);
                CGContextRelease(ctx);
                CGImageRelease(cgImage);
                NSLog(@"8. After releasing image and context context");
                report_memory();
                NSLog(@"====== RENDERING TILES END =======");
                NSLog(@"");
                rendering = NO;
            });
        }

    Here are the logs:

        ====== RENDERING TILES START =======
        1. Before creating context
        Memory in use (in bytes): 28340224 / 519442432 (5.5%)
        2. After creating color space
        Memory in use (in bytes): 28340224 / 519442432 (5.5%)
        3. About to create context with size: {6324, 5208}
        4. After creating context
        Memory in use (in bytes): 28344320 / 651268096 (4.4%)
        5. Before creating cgimage from context
        Memory in use (in bytes): 153649152 / 651333632 (23.6%)
        6. After creating cgimage from context
        Memory in use (in bytes): 153649152 / 783159296 (19.6%)
        7. After asgning tile layer contents = cgimage
        Memory in use (in bytes): 153653248 / 783253504 (19.6%)
        8. After releasing image and context context
        Memory in use (in bytes): 21688320 / 651288576 (3.3%)
        ====== RENDERING TILES END =======

    The application crashes in random places: sometimes when reaching the end of the function, and sometimes at a random step. Which direction should I look in for a solution? Is it possible that GCD is causing the problem? Or maybe the context size, or some underlying Core Animation references?
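    As a quick sanity check on those numbers, the logged context size implies a very large backing store. A minimal sketch of the arithmetic; the 4-bytes-per-pixel figure is an assumption inferred from the RGBA/8-bits-per-component arguments to CGBitmapContextCreate:

        // Estimate the bitmap context's backing-store size from the logged
        // dimensions: 6324 x 5208 pixels at an assumed 4 bytes per pixel.
        #include <cstdio>
        int main() {
            long long bytes = 6324LL * 5208LL * 4LL;                 // 131,741,568
            std::printf("~%.1f MB\n", bytes / (1024.0 * 1024.0));    // ~125.6 MB
            // This roughly matches the ~120 MB jump between log steps 4 and 5,
            // and the CGImage copy can nearly double the footprint. Allocations
            // on that scale can get an app killed by iOS's memory watchdog
            // without leaving a conventional crash log.
            return 0;
        }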

    Read the article

  • Win32 -- Object cleanup and global variables

    - by KaiserJohaan
    Hello, I've got a question about global variables and object cleanup in C++. For example, look at the code here:

        case WM_PAINT:
            paintText(&hWnd);
            break;

        void paintText(HWND* hWnd)
        {
            PAINTSTRUCT ps;
            HBRUSH hbruzh = CreateSolidBrush(RGB(0,0,0));
            HDC hdz = BeginPaint(*hWnd, &ps);
            char s1[] = "Name";
            char s2[] = "IP";
            SelectBrush(hdz, hbruzh);
            SelectFont(hdz, hFont);
            SetBkMode(hdz, TRANSPARENT);
            TextOut(hdz, 3, 23, s1, sizeof(s1));
            TextOut(hdz, 10, 53, s2, sizeof(s2));
            EndPaint(*hWnd, &ps);
            DeleteObject(hdz);
            DeleteObject(hbruzh); // bad?
            DeleteObject(ps);     // bad?
        }

    1) First of all, which objects is it good to delete and which ones is it NOT good to delete, and why? I'm not 100% sure about this.

    2) Since WM_PAINT is called every time the window is redrawn, would it be better to simply store ps, hdz and hbruzh as global variables instead of re-initializing them every time? The downside, I guess, would be tons of global variables in the end, but performance-wise would it not be less CPU-consuming? I know it probably won't matter; I'm just aiming to be as minimal as possible for educational purposes.

    3) What about libraries that are loaded in? For example:

        //
        // Main
        //
        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
        {
            // initialize vars
            HWND hWnd;
            WNDCLASSEX wc;
            HINSTANCE hlib = LoadLibrary("Riched20.dll");
            ThishInstance = hInstance;
            ZeroMemory(&wc, sizeof(wc));

            // set WNDCLASSEX props
            wc.cbSize = sizeof(WNDCLASSEX);
            wc.lpfnWndProc = WindowProc;
            wc.hInstance = ThishInstance;
            wc.hIcon = LoadIcon(hInstance, MAKEINTRESOURCE(IDI_MYICON));
            wc.lpszMenuName = MAKEINTRESOURCE(IDR_MENU1);
            wc.hCursor = LoadCursor(NULL, IDC_ARROW);
            wc.hbrBackground = (HBRUSH)COLOR_WINDOW;
            wc.lpszClassName = TEXT("PimpClient");
            RegisterClassEx(&wc);

            // create main window and display it
            hWnd = CreateWindowEx(NULL, wc.lpszClassName, TEXT("PimpClient"), 0,
                                  300, 200, 450, 395, NULL, NULL, hInstance, NULL);
            createWindows(&hWnd);
            ShowWindow(hWnd, nCmdShow);

            // loop message queue
            MSG msg;
            while (GetMessage(&msg, NULL, 0, 0))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }

            // cleanup?
            FreeLibrary(hlib);
            return msg.wParam;
        }

    3 cont) Is there a reason to FreeLibrary at the end? I mean, when the process terminates, all its resources are freed anyway. And since the library is used to paint text throughout the program, why would I want to free it before that? Cheers
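    As context for question 1, a minimal sketch of the usual cleanup rules in a paint handler: delete only the GDI objects you create; the DC from BeginPaint is released by EndPaint, never by DeleteObject; and a PAINTSTRUCT is a plain struct that needs no deleting at all.

        case WM_PAINT:
        {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hWnd, &ps);               // DC is owned by the system
            HBRUSH brush = CreateSolidBrush(RGB(0, 0, 0)); // we own this one
            HGDIOBJ old = SelectObject(hdc, brush);        // remember the previous brush
            TextOut(hdc, 3, 23, TEXT("Name"), 4);
            SelectObject(hdc, old);                        // deselect before deleting
            DeleteObject(brush);                           // delete only what we created
            EndPaint(hWnd, &ps);                           // releases the DC; no DeleteObject
            return 0;
        }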

    Read the article

  • Help with infrequent segmentation fault in accessing boost::unordered_multimap or struct

    - by Sarah
    I'm having trouble debugging a segmentation fault. I'd appreciate tips on how to go about narrowing in on the problem. The error appears when an iterator tries to access an element of a struct Infection, defined as:

        struct Infection {
        public:
            explicit Infection( double it, double rt ) : infT( it ), recT( rt ) {}
            double infT; // infection start time
            double recT; // scheduled recovery time
        };

    These structs are kept in a special structure, InfectionMap:

        typedef boost::unordered_multimap< int, Infection > InfectionMap;

    Every member of class Host has an InfectionMap carriage. Recovery times and associated host identifiers are kept in a priority queue. When a scheduled recovery event arises in the simulation for a particular strain s in a particular host, the program searches through the carriage of that host to find the Infection whose recT matches the recovery time (double recoverTime). (For reasons that aren't worth going into, it's not as expedient for me to use recT as the key to InfectionMap; the strain s is more useful, and coinfections with the same strain are possible.)

        assert( carriage.size() > 0 );
        pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s );
        InfectionMap::iterator it;
        for ( it = ret.first; it != ret.second; it++ ) {
            if ( ((*it).second).recT == recoverTime ) { // produces seg fault
                carriage.erase( it );
            }
        }

    I get a "Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address..." on the line specified above. The recoverTime is fine, and the assert(...) in the code is not tripped. As I said, this seg fault appears 'randomly' after thousands of successful recovery events. How would you go about figuring out what's going on? I'd love ideas about what could be wrong and how I can further investigate the problem.

    Update: I added a new assert and a check just inside the for loop:

        assert( carriage.size() > 0 );
        assert( carriage.count( s ) > 0 );
        pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s );
        InfectionMap::iterator it;
        cout << "carriage.count(" << s << ")=" << carriage.count(s) << endl;
        for ( it = ret.first; it != ret.second; it++ ) {
            cout << "(*it).first=" << (*it).first << endl; // error here
            if ( ((*it).second).recT == recoverTime ) {
                carriage.erase( it );
            }
        }

    The EXC_BAD_ACCESS error now appears at the (*it).first call, again after many thousands of successful recoveries. Can anyone give me tips on how to figure out how this problem arises? I'm trying to use gdb. Frame 0 from the backtrace reads:

        #0 0x0000000100001d50 in Host::recover (this=0x100530d80, s=0, recoverTime=635.91148029170529) at Host.cpp:317

    I'm not sure what useful information I can extract here.

    Update 2: I added a break; after the carriage.erase(it). This works, but I have no idea why (e.g., why it would remove the seg fault at (*it).first).
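    A likely mechanism worth flagging here (an inference, not something confirmed in the question): carriage.erase(it) invalidates it, so the loop's subsequent it++ increments a dead iterator, which is undefined behavior that can appear to work for thousands of iterations before crashing. A minimal sketch of the conventional erase-safe loop, assuming Boost.Unordered's erase returns the iterator following the erased element (as the C++11 unordered containers do):

        // Illustrative erase-safe iteration over an equal_range; 'carriage',
        // 's' and 'recoverTime' are the poster's names.
        std::pair<InfectionMap::iterator, InfectionMap::iterator> ret =
            carriage.equal_range( s );
        for ( InfectionMap::iterator it = ret.first; it != ret.second; ) {
            if ( it->second.recT == recoverTime ) {
                it = carriage.erase( it ); // returns the next valid iterator
            } else {
                ++it;
            }
        }

    This would also explain why adding break; after the erase hides the crash: the loop never touches the invalidated iterator again.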

    Read the article

  • Windows Form hangs when running threads

    - by Benjamin Ortuzar
    I have written a .NET C# Windows Forms app in Visual Studio 2008 that uses a Semaphore to run multiple jobs as threads when the Start button is pressed. It's experiencing an issue where the Form goes into a coma after running for 40 minutes or more. The log files indicate that the current jobs complete and it picks a new job from the list, and there it hangs. I have noticed that the Windows Form becomes unresponsive when this happens. The form is running in its own thread. This is a sample of the code I am using:

        protected void ProcessJobsWithStatus(Status status)
        {
            int maxJobThreads = Convert.ToInt32(ConfigurationManager.AppSettings["MaxJobThreads"]);
            Semaphore semaphore = new Semaphore(maxJobThreads, maxJobThreads); // Available=3; Capacity=3
            int threadTimeOut = Convert.ToInt32(ConfigurationManager.AppSettings["ThreadSemaphoreWait"]); // in seconds

            // gets a list of jobs from a DB query
            List<Job> jobList = jobQueue.GetJobsWithStatus(status);

            // we need to create a list of threads to check if they all have stopped
            List<Thread> threadList = new List<Thread>();

            if (jobList.Count > 0)
            {
                foreach (Job job in jobList)
                {
                    logger.DebugFormat("Waiting green light for JobId: [{0}]", job.JobId.ToString());
                    if (!semaphore.WaitOne(threadTimeOut * 1000))
                    {
                        logger.ErrorFormat("Semaphore Timeout. A thread did NOT complete in time[{0} seconds]. JobId: [{1}] will start", threadTimeOut, job.JobId.ToString());
                    }
                    logger.DebugFormat("Acquired green light for JobId: [{0}]", job.JobId.ToString());

                    // Only N threads can get here at once
                    job.semaphore = semaphore;
                    ThreadStart threadStart = new ThreadStart(job.Process);
                    Thread thread = new Thread(threadStart);
                    thread.Name = job.JobId.ToString();
                    threadList.Add(thread);
                    thread.Start();
                }

                logger.Info("Waiting for all threads to complete");

                // check that all threads have completed
                foreach (Thread thread in threadList)
                {
                    logger.DebugFormat("About to join thread(jobId): {0}", thread.Name);
                    if (!thread.Join(threadTimeOut * 1000))
                    {
                        logger.ErrorFormat("Thread did NOT complete in time[{0} seconds]. JobId: [{1}]", threadTimeOut, thread.Name);
                    }
                    else
                    {
                        logger.DebugFormat("Thread did complete in time. JobId: [{0}]", thread.Name);
                    }
                }
            }
            logger.InfoFormat("Finished Processing Jobs in Queue with status [{0}]...", status);
        }

        // form methods
        private void button1_Click(object sender, EventArgs e)
        {
            buttonStop.Enabled = true;
            buttonStart.Enabled = false;
            ThreadStart threadStart = new ThreadStart(DoWork);
            workerThread = new Thread(threadStart);
            serviceStarted = true;
            workerThread.Start();
        }

        private void DoWork()
        {
            EmailAlert emailAlert = new EmailAlert();
            // start an endless loop; loop will abort only when "serviceStarted" flag = false
            while (serviceStarted)
            {
                emailAlert.ProcessJobsWithStatus(0);
                // yield
                if (serviceStarted)
                {
                    Thread.Sleep(new TimeSpan(0, 0, 1));
                }
            }
            // time to end the thread
            Thread.CurrentThread.Abort();
        }

        // job.Process()
        public void Process()
        {
            try
            {
                // sets the status, DateTimeStarted, and the processId
                this.UpdateStatus(Status.InProgress);
                // do something
                logger.Debug("Updating Status to [Completed]");
                // sets status, DateFinished
                this.UpdateStatus(Status.Completed);
            }
            catch (Exception e)
            {
                logger.Error("Exception: " + e.Message);
                this.UpdateStatus(Status.Error);
            }
            finally
            {
                logger.Debug("Releasing semaphore");
                semaphore.Release();
            }
        }

    I have tried to log what I can into a file to detect where the problem is happening, but so far I haven't been able to identify where this happens. Losing control of the Windows Form makes me think that this has nothing to do with processing the jobs. Any ideas?
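    For reference, the bounded-concurrency pattern being described, sketched in C++20 purely for illustration (this is not the poster's C#; the names and the limit of 3 are invented):

        // Bounded job spawning with a counting semaphore: acquire a slot
        // before starting a job, release it when the job finishes.
        #include <semaphore>
        #include <thread>
        #include <vector>

        std::counting_semaphore<3> slots(3); // at most 3 jobs in flight

        void process_job(int id) {
            // ... job body ...
            slots.release(); // the C# code does this in a finally block
        }

        int main() {
            std::vector<std::jthread> jobs;
            for (int id = 0; id < 10; ++id) {
                slots.acquire(); // block until a slot frees up
                jobs.emplace_back(process_job, id);
            }
            return 0;
        } // jthreads join on destruction, like the Join loop in the question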

    Read the article

  • Dynamically loading modules in Python (+ multi processing question)

    - by morpheous
    I am writing a Python package which reads a list of modules (along with ancillary data) from a configuration file. I then want to iterate through each of the dynamically loaded modules and invoke a do_work() function in each, which will spawn a new process so that the code runs ASYNCHRONOUSLY in a separate process. At the moment, I am importing the list of all known modules at the beginning of my main script. This is a nasty hack, I feel, and is not very flexible, as well as being a maintenance pain. This is the function that spawns the processes. I would like to modify it to dynamically load the module when it is encountered. The key in the dictionary is the name of the module containing the code:

        def do_work(work_info):
            for (worker, dataset) in work_info.items():
                # import the module defined by variable worker here...

                # [Edit] NOT using threads anymore, want to spawn processes asynchronously here...
                #t = threading.Thread(target=worker.do_work, args=[dataset])
                # I'll NOT daemonize since spawned children need to clean up on shutdown,
                # since the threads will be holding resources
                #t.daemon = True
                #t.start()

    Question 1: When I call the function in my script (as written above), I get the following error:

        AttributeError: 'str' object has no attribute 'do_work'

    This makes sense, since the dictionary key is a string (the name of the module to be imported). When I add the statement import worker before spawning the thread, I get the error:

        ImportError: No module named worker

    This is strange, since the variable name rather than the value it holds is being used; when I print the variable, I get the value (as I expect). What's going on?

    Question 2: As I mentioned in the comments section, I realize that the do_work() function written in the spawned children needs to clean up after itself. My understanding is to write a clean_up function that is called when do_work() has completed successfully, or when an unhandled exception is caught. Is there anything more I need to do to ensure resources don't leak or leave the OS in an unstable state?

    Question 3: If I comment out the t.daemon flag statement, will the code still run ASYNCHRONOUSLY? The work carried out by the spawned children is pretty intensive, and I don't want to have to wait for one child to finish before spawning another child. BTW, I am aware that threading in Python is, in reality, a kind of time sharing/slicing; that's OK. Lastly, is there a better (more Pythonic) way of doing what I'm trying to do?

    [Edit] After reading a little more about Python's GIL and the threading (ahem, hack) in Python, I think it's best to use separate processes instead (at least IIUC, the script can take advantage of multiple processors if they are available), so I will be spawning new processes instead of threads. I have some sample code for spawning processes, but it is a bit trivial (using lambda functions). I would like to know how to expand it so that it can deal with running functions in a loaded module (like I am doing above). This is a snippet of what I have:

        def do_mp_bench():
            q = mp.Queue() # not only thread safe, but "process safe"
            p1 = mp.Process(target=lambda: q.put(sum(range(10000000))))
            p2 = mp.Process(target=lambda: q.put(sum(range(10000000))))
            p1.start()
            p2.start()
            r1 = q.get()
            r2 = q.get()
            return r1 + r2

    How may I modify this to process a dictionary of modules and run a do_work() function in each loaded module in a new process?

    Read the article

< Previous Page | 725 726 727 728 729 730 731 732 733 734 735 736  | Next Page >