Search Results

Search found 22569 results on 903 pages for 'win32 process'.

Page 537/903 | < Previous Page | 533 534 535 536 537 538 539 540 541 542 543 544  | Next Page >

  • Where is the PHP executable on Ubuntu?

    - by user601L
    I have installed Apache and PHP. I know PHP works, as I have tested a simple PHP file on the Apache server. I'm writing a simple webserver which should be able to process PHP files. What I want to do is, once I get a request for a PHP file, run something like 'exec php test.php', get the output, and pass it to the client. As I'm not much into Ubuntu, I don't know where the PHP executable is (it should be in /bin, right?) to do it. But there is no php file inside /bin or /usr/bin. When I run 'which php' it shows nothing. How do I do this?
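
    A likely explanation is that only the Apache PHP module is installed, not the command-line binary. A minimal sketch of the fix, assuming an Ubuntu release of that era where the CLI package is named php5-cli:

        # install the command-line interpreter (package name assumed: php5-cli)
        sudo apt-get install php5-cli

        # confirm where it landed and test it
        which php            # typically /usr/bin/php
        php /path/to/test.php

    Once the CLI package is in place, 'which php' should report the path that the exec call can use.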

    Read the article

  • Lighttpd with FastCGI won't create /tmp/fcgi.sock on startup?

    - by Marlon
    I'm running lighttpd-1.4.19 on a Debian 5 box and am trying to run web2py with FastCGI. The problem is that lighttpd does not create the socket file /tmp/fcgi.sock. If I create the file myself with 'touch /tmp/fcgi.sock', lighttpd will start, but after running for some time it throws this error:

        unexpected end-of-file (perhaps the fastcgi process died): pid: 0 socket: unix:/tmp/fcgi.sock

    My config looks like this:

        fastcgi.server = (
            "/handler_web2py.fcgi" => (
                "handler_web2py" => (   # name for logs
                    "check-local"  => "disable",
                    "socket"       => "/tmp/fcgi.sock",
                    "idle-timeout" => 20,
                    "max-procs"    => 1
                )
            ),
        )

    Is there any known problem with running lighttpd on Debian 5? Thanks for any help. I have pasted the whole lighttpd config: http://pastie.org/1660646
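
    The "pid: 0" in the error is the clue: lighttpd is not spawning the FastCGI process itself, so nothing ever listens on the socket (a file created with touch is just an empty file, not a listening socket). A sketch of the spawning variant, assuming web2py's handler script lives at /var/www/web2py/fcgihandler.py (path assumed) – with "bin-path" set, lighttpd forks the process and creates the socket itself on startup:

        fastcgi.server = (
            "/handler_web2py.fcgi" => (
                "handler_web2py" => (
                    "check-local" => "disable",
                    "socket"      => "/tmp/fcgi.sock",
                    "bin-path"    => "/var/www/web2py/fcgihandler.py",  # path assumed
                    "max-procs"   => 1
                )
            ),
        )

    The other route is to start the web2py FastCGI process yourself (e.g. from an init script) so it creates and listens on /tmp/fcgi.sock before lighttpd starts proxying to it.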

    Read the article

  • Generating documents with templating from a form

    - by Anna
    Hello, I would like to create a document generator with templating. The workflow should be as follows: (1) the user inputs data into a static form (simple text input); (2) the user chooses a graphically designed template; (3) a document with the chosen template containing the user data is generated. The initial template repository is prepared in advance, but it should be easy to add new templates to the process. I have the full MS Office suite, and the preferred file format is MS Word (.doc). I can do a little VB scripting if needed, but I prefer not to. Any advice would be greatly appreciated. Thank you, Anna

    Read the article

  • How to synchronize two folders on two remote Linux virtual machines

    - by Manoj Agarwal
    I have two virtual machines; the host OS is ESXi 3.5 and the guest OS is CentOS 4.6. There are two remotely located ESXi servers, each containing a CentOS 4.6 virtual machine. Whatever change I make to any file/folder in one virtual machine should be automatically synchronized to the other remote virtual machine. The synchronization process should be automatic. It should only sync differentials, not replicate an entire copy with an overwrite operation. Sync should be intelligent enough to look for what has changed and what hasn't, and should only update the changed files/folders. Further, there should be some sort of overview and selection for syncing; for example, if it shows that 4 files have changed, it should be possible to sync only two files and leave the other two for the time being. So, some intelligent syncing mechanism for Linux is needed.
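
    A minimal sketch of the differential-and-preview part with rsync (paths and hostnames are assumptions):

        # preview what would change, without transferring anything (-n = dry run)
        rsync -avzn --itemize-changes /data/ root@remote-vm:/data/

        # push everything that actually differs (delta transfer, not full copies)
        rsync -avz /data/ root@remote-vm:/data/

        # sync only a hand-picked subset of the changed files
        rsync -avz --files-from=/tmp/picked.txt /data/ root@remote-vm:/data/

    rsync only sends the changed parts of changed files, which covers the differential requirement. For the fully automatic trigger, rsync polls rather than watches; pairing it with inotifywait (from inotify-tools) in a small loop, or reaching for a purpose-built tool such as Unison or DRBD, is the usual route.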

    Read the article

  • Application Launcher for Hyper-V Server

    - by peterchen
    We are currently in the process of setting up a Hyper-V Server R2 machine. Though there's not a lot we need to do within Hyper-V Server itself, the command line is decidedly minimalistic. There are a few administrative/hardware-monitoring tools that we want to run on the machine itself (accessed through Remote Desktop). I am looking for a simple program/application launcher where we can hook up these maintenance tools (and one to open a new cmd.exe window in case I habitually close the one I'm working in!). However, all the tools I have tried so far more or less assume Explorer is present, and fail in different ways. Before I go and write a simple one myself, any recommendations?
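
    Before writing a real program, a plain batch file covers the basics and assumes nothing about Explorer – a minimal sketch (the tool path is a placeholder):

        @echo off
        :menu
        echo  1 - New command window
        echo  2 - Task Manager
        echo  3 - Hardware monitor (path assumed)
        set /p choice=Choose:
        if "%choice%"=="1" start cmd
        if "%choice%"=="2" start taskmgr
        if "%choice%"=="3" start "" "C:\Tools\hwmonitor.exe"
        goto menu

    start, set /p, and taskmgr are all available on a Server Core-style installation; each launched tool gets its own window, and the menu loop survives closing them.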

    Read the article

  • How can I install mod_dav_svn 1.6 on CentOS 5.4?

    - by Vincenzo
    I'm trying to install mod_dav_svn on CentOS 5.4, and this is what I see:

        # yum --enablerepo=rpmforge install mod_dav_svn
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirrors.adams.net
         * base: mirror.sanctuaryhost.com
         * extras: mirror.sanctuaryhost.com
         * rpmforge: fr2.rpmfind.net
         * updates: mirror.steadfast.net
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package mod_dav_svn.x86_64 0:1.4.2-4.el5_3.1 set to be updated
        --> Processing Dependency: subversion = 1.4.2-4.el5_3.1 for package: mod_dav_svn
        --> Running transaction check
        ---> Package subversion.i386 0:1.4.2-4.el5_3.1 set to be updated
        --> Finished Dependency Resolution
        [...]

    Version 1.4.2 is older than my installed Subversion 1.6.9 (which I installed earlier). How and where can I get mod_dav_svn in version 1.6.9?
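
    The base and rpmforge repos only carry Subversion 1.4 for el5; newer builds live in separate repositories such as rpmforge-extras (assuming that is also where the installed 1.6.9 came from). A sketch:

        # check which repo provided the installed 1.6.9 subversion
        rpm -qi subversion | grep -i vendor

        # pull mod_dav_svn from the same place, at the matching version
        yum --enablerepo=rpmforge-extras install mod_dav_svn

    The key point is that mod_dav_svn must come from the same repository, at exactly the same version, as the subversion package it links against.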

    Read the article

  • Sound clipping in Windows 7 64-bit

    - by Sonic Soul
    Something is up with the 64-bit version of Windows 7. I have two (good prosumer) sound cards which I've used in other versions of Windows 7 without problems, but now I get severe clipping (noise; sound having little gaps surrounded by static). The cards I am using are an Echo Mia and a Native Instruments Audio Kontrol 1. The Audio Kontrol 1 is an external card, and it would work OK for a few hours after rebooting, then go back into clipping mode, to the point where YouTube videos would not play: the sound processing would stall for extended periods of time. The Echo Mia is performing better, but there is still some clipping and distortion. The machine is newly built, with a 64-bit Core i7 920 CPU, 6 GB of RAM, and an outdated Nvidia video card (GeForce 7950 GX2).

    Read the article

  • Standards Matter: The Battle For Interoperability Continues

    - by michael.rowell
    Great article, although it is a little dated at this point: the InformationWeek article "Standards Matter: The Battle for Interoperability Goes On".

    Summary: If you're guilty of relegating standards support to a "nice to have" feature rather than a requirement, you're part of the problem. If you want products to interoperate, be prepared to walk away if a vendor can't prove compliance. Don't be brushed off with promises of standards support "on the road map." The alternative is vendor lock-in and higher costs, including the cost of maintaining systems that don't work together. Standards bodies are imperfect and must do better; the alternative is splintered networks and broken promises.

    The point: "The secret sauce to a successful 'working standard' isn't necessarily IETF or another longstanding body," says Jonathan Feldman, director of IT services for the city of Asheville, N.C., and an InformationWeek Analytics contributor. "Rather, an earnest and honest effort by a group that has governance outside of a single corporation's control is what's important."

    In order to have true interoperability, vendors as well as customers must be actively engaged in the standards process. Vendors must be willing to truly work together and not protect an existing product. Customers must likewise be willing to work together and not demand a solution that meets only their own needs; it should instead meet the needs of all participants. Ultimately, customers must be willing to reward vendor compliance by requiring compliance in the products and services they purchase and deploy. Managers who deploy systems that don't comply with standards are only hurting themselves. Standards do matter. When developed openly and deployed compliantly, standards deliver interoperability, which provides solid business value.

    Read the article

  • Access Control Service: Home Realm Discovery (HRD) Gotcha

    - by Your DisplayName here!
    I really like ACS v2. One feature that is very useful is home realm discovery. ACS provides a NASCAR-style list as well as discovery based on email addresses. You can take control of the home realm selection process yourself by downloading the JSON feed or by manually setting the home realm parameter. Plenty of options – the only option missing is turning it off… In other words, when you set up your ACS namespace and realm and register identity providers, there is no way to keep the list of identity providers secret. An interested “user” can always retrieve all registered identity providers (using the browser or by downloading the JSON feed). This may not be an issue with web identity providers, but when you use ACS to federate with customers or business partners, you may not want to disclose that list to the public (or to other customers). This is an adoption blocker for certain situations. I hope this feature will be added soon. In addition, I would also like to see a feature I call “home realm aliases”: some random string that I can use as a whr parameter instead of the real issuer URI.
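
    For context, a sketch of how the whr parameter rides along on the WS-Federation sign-in request (namespace, realm, and issuer URI here are hypothetical):

        https://mytenant.accesscontrol.windows.net/v2/wsfederation
            ?wa=wsignin1.0
            &wtrealm=https%3A%2F%2Fapp.example.com%2F
            &whr=https%3A%2F%2Fidp.partner.example%2F

    When whr matches a registered identity provider's issuer, ACS skips the selection page and redirects straight to that provider – which is exactly why a guessable issuer URI, rather than a random alias, leaks information.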

    Read the article

  • How do I enable a disabled Event Notification?

    - by Derick Mayberry
    I have a scenario where I am using external activation to process documents being sent in from the entire Navy fleet. Normally I have no problems, but just a few days ago an administrator changed passwords, my queue processing failed, and I rolled back the transaction with this C# code:

        catch (Exception)
        {
            TransporterService.WriteEventToWindowsLog(AppName, "Rolling Back Transaction:", ERROR);
            broker.Tran.Rollback();
            break;
        }

    After that, my target queue would continue to fill up, but nothing reached the external activation queue. Does the event notification get disabled once a transaction is rolled back? Should I have called broker.EndDialog here when catching my exception? Also, after my event notification is disabled (if that is actually what's happening), how do I re-engage it? Do I have to drop it and recreate it? Thanks in advance for any help. I love Service Broker and it's working wonderfully except for this problem, which I hope to fix soon.
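
    What usually bites here is poison-message handling rather than the notification itself: after five consecutive rollbacks on the same queue, Service Broker disables the queue. A hedged T-SQL sketch for checking and re-enabling (queue, service, and notification names are hypothetical):

        -- is the queue still enabled, and is the event notification still there?
        SELECT name, is_receive_enabled, is_enqueue_enabled
        FROM sys.service_queues;
        SELECT * FROM sys.event_notifications;  -- database-scoped notifications

        -- re-enable a queue that was disabled by poison-message handling
        ALTER QUEUE dbo.ExternalActivationQueue WITH STATUS = ON;

        -- event notifications that hit an error are dropped, not disabled;
        -- if it is gone, it has to be recreated
        CREATE EVENT NOTIFICATION NotifyExternalActivator
        ON QUEUE dbo.TargetQueue
        FOR QUEUE_ACTIVATION
        TO SERVICE 'ExternalActivatorService', 'current database';

    If the notification is intact, also check that the external activator service still runs under the account whose password was changed.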

    Read the article

  • Local install of WP site brought down from host - home page is OK but other pages redirect to WAMP config page

    - by jeff
    Local install of a WP site brought down from the host - the home page is OK, but other pages redirect to the WAMP config page. I copied all the site files from the host into the www dir under local WAMP. I got the database from the host, loaded it into a new local DB, and used this tool to adjust site_on_web.com to "localhost/site_on_local". Now the home page works great and I can log in to the admin page, but when I click on the Reservations page (and others), the site just goes to the WAMP server config page, even though the URL shows correctly as localhost/site_on_local/reservations. My htaccess file is this:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    and the rewrite module is enabled in the WAMP Apache modules setting. When I disable the rewrite module, or clear out the whole htaccess file, the pages just go to:

        Not Found
        The requested URL /ritas041214/about-ritas/ was not found on this server.

    Please help, as I am now unsure about my process for moving the site down (and back up) and making it work; without this I am lost...
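
    Since the site lives in a subdirectory locally, the rewrite rules still point at the document root, so anything that isn't the home page falls through to WAMP's default page. A sketch of the subdirectory form (directory name taken from the question):

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /site_on_local/
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /site_on_local/index.php [L]
        </IfModule>
        # END WordPress

    The WordPress Address and Site Address values (Settings > General, or the wp_options table) need the same /site_on_local suffix, and Apache's AllowOverride must permit htaccess rewrites for the www directory.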

    Read the article

  • Wireless DHCP doesn't work until wired Ethernet plugged in

    - by MT_Head
    A client of mine has an Asus R1F tablet running Windows XP Tablet SP3. It has an Intel 3945ABG wireless card; wired Ethernet is a Realtek something-or-other. In the past few days, it's developed an odd problem:

        - WiFi authenticates, but can't get an address via DHCP.
        - Plug in wired Ethernet: both interfaces get good addresses.
        - Unplug the cable: WiFi continues to work until shutdown.
        - Next morning, repeat the process.

    I've tried:

        - turning WiFi off/on (there's a slider switch)
        - disabling/re-enabling via Device Mangler
        - uninstalling and reinstalling the driver for the 3945ABG...
        - changing from Intel PROSet to Windows Wireless Zero Config (and back)
        - restarting the router
        - changing the static DHCP assignments at the router
        - upgrading the router firmware, just on general principles

    The router/access point is pfSense 1.2.3RC1 (was 1.2.2); its wireless card is Atheros-based. None of the 12 other users (5 with tablets) are having problems.

    Read the article

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    Will be putting a new Windows Server 2008 Standard Edition box into a single-domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operations master DC as a backup DC, as the other member servers running 2003 have either accounting/SQL or Exchange on them. Eventually all the Win2k servers will be decommissioned, but until then I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operations master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?

    Read the article

  • Zoom Layer centered on a Sprite

    - by clops
    I am in the process of developing a small game where a space-ship travels through a layer (doh!); in some situations the spaceship comes close to an enemy space ship, and the whole layer is zoomed in on the two, with the zoom level being dependent on the distance between the ship and the enemy. All of this works fine. The main question, however, is how do I keep the zoom centered on the center point between the two space-ships and make sure that the two are not off-screen? Currently I control the zooming in the GameLayer object through the update method; here is the code (there is no layer repositioning here yet):

        -(void) prepareLayerZoomBetweenSpaceship {
            CGPoint mainSpaceShipPosition = [mainSpaceShip position];
            CGPoint enemySpaceShipPosition = [enemySpaceShip position];

            float distance = powf(mainSpaceShipPosition.x - enemySpaceShipPosition.x, 2)
                           + powf(mainSpaceShipPosition.y - enemySpaceShipPosition.y, 2);
            distance = sqrtf(distance);

            /* Distance > 250 --> no zoom
               Distance < 100 --> maximum zoom */
            float myZoomLevel = 0.5f;
            if (distance < 100) {
                // maximum zoom in
                myZoomLevel = 1.0f;
            } else if (distance > 250) {
                myZoomLevel = 0.5f;
            } else {
                myZoomLevel = 1.0f - (distance - 100) * 0.0033f;
            }
            [self zoomTo:myZoomLevel];
        }

        -(void) zoomTo:(float)zoom {
            if (zoom > 1) {
                zoom = 1;
            }
            // Set the scale.
            if (self.scale != zoom) {
                self.scale = zoom;
            }
        }

    Basically my question is: how do I zoom the layer and center it exactly between the two ships? I guess this is like a pinch zoom with two fingers!
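
    A sketch of the missing repositioning step, under cocos2d conventions (layer anchorPoint assumed at (0,0); winSize from CCDirector): after scaling, move the layer so the midpoint between the ships lands at the screen center.

        -(void) centerLayerBetweenShips {
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            CGPoint a = [mainSpaceShip position];
            CGPoint b = [enemySpaceShip position];
            CGPoint mid = ccpMult(ccpAdd(a, b), 0.5f);  // midpoint in layer coordinates

            // place the midpoint at the screen center, accounting for the current scale
            self.position = ccpSub(ccp(winSize.width / 2, winSize.height / 2),
                                   ccpMult(mid, self.scale));
        }

    Calling this right after zoomTo: each frame keeps the view centered; both ships stay on screen as long as the zoom formula caps the visible distance, as it does here.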

    Read the article

  • Switch or a Dictionary when assigning to a new object

    - by KChaloux
    Recently, I've come to prefer mapping 1-1 relationships using Dictionaries instead of switch statements. I find it a little faster to write and easier to mentally process. Unfortunately, when mapping to a new instance of an object, I don't want to define it like this:

        var fooDict = new Dictionary<int, IBigObject>()
        {
            { 0, new Foo() }, // Creates an instance of Foo
            { 1, new Bar() }, // Creates an instance of Bar
            { 2, new Baz() }  // Creates an instance of Baz
        };

        var quux = fooDict[0]; // quux references Foo

    Given that construct, I've wasted CPU cycles and memory creating 3 objects, doing whatever their constructors might contain, and only ended up using one of them. I also believe that mapping other objects to fooDict[0] in this case will cause them to reference the same thing, rather than creating a new instance of Foo as intended. A solution would be to use a lambda instead:

        var fooDict = new Dictionary<int, Func<IBigObject>>()
        {
            { 0, () => new Foo() }, // Returns a new instance of Foo when invoked
            { 1, () => new Bar() }, // Ditto Bar
            { 2, () => new Baz() }  // Ditto Baz
        };

        var quux = fooDict[0](); // equivalent to saying 'var quux = new Foo();'

    Is this getting to a point where it's too confusing? It's easy to miss that () on the end. Or is mapping to a function/expression a fairly common practice? The alternative would be to use a switch:

        IBigObject quux;
        switch (someInt)
        {
            case 0: quux = new Foo(); break;
            case 1: quux = new Bar(); break;
            case 2: quux = new Baz(); break;
        }

    Which invocation is more acceptable?

        - Dictionary: faster lookups and fewer keywords (case and break)
        - Switch: more commonly found in code, and doesn't require the use of a Func<> object for indirection

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor clock tick takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries:

        - I/O
        - Exceptions/errors
        - Interfaces with programs written in other languages
        - Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.

    Read the article

  • Fine-tuning a LNMP stack

    - by Norman
    I'm in the process of setting up a server with 4 GB RAM and 2 CPUs. The stack will be CentOS + NGINX + MySQL + PHP (with APC) and spawn-fcgi. It will be used to serve 10 WordPress blogs, 3 of which receive about 20,000 hits per day. Each WordPress instance is equipped with W3 Total Cache. I have a few variables to play with:

        - NGINX (how many worker_processes, worker_connections, etc.)
        - PHP (what parameters in php.ini should I change? what about APC?)
        - spawn-fcgi (right now I have 6 php-cgi processes spawned; how many should I have?)

    I realize it's hard to tell without testing, but if you could please provide me with some ballpark numbers, that would be helpful too.
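
    As a starting point only – a sketch of the NGINX side, under the usual rule of thumb of one worker per CPU (every number here is an assumption to be tuned against real traffic):

        # /etc/nginx/nginx.conf (fragment)
        worker_processes  2;          # one per CPU core

        events {
            worker_connections  1024; # per worker; 2 x 1024 concurrent connections total
        }

        http {
            gzip              on;     # cheap CPU trade for less bandwidth
            keepalive_timeout 15;     # don't pin connections open for long
        }

    On the PHP side, with W3 Total Cache absorbing most requests, an apc.shm_size of around 64-128 MB and somewhere between 4 and 10 php-cgi children is a common ballpark; the constraint is that memory_limit times the number of children must fit comfortably in the RAM left over after MySQL.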

    Read the article

  • What are the benefits of running an app server in user space, like Unicorn, as opposed to as sudo?

    - by dan
    I've been using Phusion Passenger + Rails/Sinatra for a lot of projects. Passenger runs under the main Nginx or Apache process. But I'm interested in Unicorn, partly because it runs in user space. You just set up Nginx to proxy_pass requests to a Unix socket that is connected to the Unicorn processes you fire up under a normal user account. Is there anything to be said as far as advantages and disadvantages of these two alternative approaches to running a web app? I mean in terms of ease of administration, stability, simplicity, etc.
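
    For reference, the moving parts being compared – a minimal sketch of the Unicorn arrangement (socket path and worker count are assumptions):

        # nginx.conf (fragment): hand requests to the user-space Unicorn workers
        upstream app {
            server unix:/tmp/unicorn.myapp.sock fail_timeout=0;
        }
        server {
            listen 80;
            location / {
                proxy_set_header Host $host;
                proxy_pass http://app;
            }
        }

        # unicorn.rb (fragment): run as an ordinary user, no root needed
        listen "/tmp/unicorn.myapp.sock"
        worker_processes 2

    Started with something like 'unicorn -c unicorn.rb -E production' under the deploy user's account, nothing in the app tier requires root – which is the user-space property the question is about.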

    Read the article

  • How to close all background processes in Unix?

    - by Gabi Purcaru
    I have something like:

        cd project && python manage.py runserver & cd utilities && ./coffee_auto_compiler.py

    And I want both of them to close on Ctrl-C (or some other command). How can I accomplish that?

    EDIT: I tried using jobs -x kill and kill `jobs -p`, but it doesn't seem to kill what I need. Here is what I mean:

        moon  8119  0.0  0.0   7556  3008 pts/0  S  13:17  0:00 /bin/bash
        moon  8120  6.8  0.4  24568 18928 pts/0  S  13:17  0:00 python manage.py runserver

    jobs -p gives me just process 8119, but I also need to close 8120, since that's what the first command started. If it helps, the commands are actually in a Makefile, and I want it to run two daemons at the same time (and somehow close them at the same time). And yes, I'm using Ubuntu, with bash.
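
    A minimal sketch of one way to get both killed on Ctrl-C, using a trap and a kill on the whole process group (assuming a small bash wrapper script rather than a Makefile rule):

        #!/bin/bash
        # signal every process in this script's process group on Ctrl-C or exit
        trap 'kill 0' INT TERM EXIT

        (cd project   && python manage.py runserver) &
        (cd utilities && ./coffee_auto_compiler.py) &

        wait   # block until Ctrl-C; the trap then takes down both children

    The subshell parentheses keep each cd local, and 'kill 0' signals the entire process group, which catches the python child (8120) that jobs -p misses.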

    Read the article

  • How often is seq used in Haskell production code?

    - by Giorgio
    I have some experience writing small tools in Haskell and I find it very intuitive to use, especially for writing filters (using interact) that process their standard input and pipe it to standard output. Recently I tried to use one such filter on a file that was about 10 times larger than usual and I got a Stack space overflow error. After doing some reading (e.g. here and here) I have identified two guidelines to save stack space (experienced Haskellers, please correct me if I write something that is not correct):

        1. Avoid recursive function calls that are not tail-recursive (this is valid for all functional languages that support tail-call optimization).
        2. Introduce seq to force early evaluation of sub-expressions so that expressions do not grow too large before they are reduced (this is specific to Haskell, or at least to languages using lazy evaluation).

    After introducing five or six seq calls in my code my tool runs smoothly again (also on the larger data). However, I find the original code was a bit more readable. Since I am not an experienced Haskell programmer I wanted to ask if introducing seq in this way is a common practice, and how often one will normally see seq in Haskell production code. Or are there any techniques that allow one to avoid using seq too often and still use little stack space?
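
    For a concrete picture of what those seq calls usually look like – a small sketch (not the poster's code) of the classic lazy-accumulator leak and its seq-based fix:

        -- leaks: go builds the unevaluated thunk (((0 + x1) + x2) + ...),
        -- which is only collapsed at the end and can overflow the stack
        sumLazy :: [Int] -> Int
        sumLazy = go 0
          where go acc []     = acc
                go acc (x:xs) = go (acc + x) xs

        -- fixed: seq forces the accumulator at every step, keeping it a plain Int
        sumStrict :: [Int] -> Int
        sumStrict = go 0
          where go acc []     = acc
                go acc (x:xs) = let acc' = acc + x
                                in acc' `seq` go acc' xs

    The common alternatives to sprinkling seq by hand are bang patterns (go !acc ...), Data.List.foldl', and strict data fields – the same mechanism with tidier syntax.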

    Read the article

  • High Power Consumption and Wakeups on my Asus X54H with 12.04

    - by Marogian
    So I've been using powertop to try and reduce the power consumption on my laptop, as I only seem to get about 3 hours of battery. From reading other threads on here, my power consumption and wakeups seem strangely high. Here's a summary:

        The battery reports a discharge rate of 10.2 W
        Summary: 651.8 wakeups/second, 0.0 GPU ops/second and 0.0 VFS ops/sec

    The things which stand out as odd:

        1.31 W    4.0 ms/s    166.7    Interrupt    PS/2 Touchpad / Keyboard / Mouse

    So more than 10% of my battery is being consumed by my touchpad/keyboard? That doesn't seem right.

        548 mW    34.3 ms/s   45.9     Process      compiz

    Another 5% from Compiz. Is this correct?

        376 mW    1.8 ms/s    47.5     Interrupt    [51] i915
        298 mW    1.4 ms/s    37.7     Timer        tick_sched_timer

    A few more percent from these things - not quite sure what they are. For reference, I've installed Laptop Mode Tools and Jupiter (on power save), the CPU governor is definitely on powersave, and brightness is on minimum. What else can I do? Any ideas? I've seen other posts on here reporting laptop battery lives of ~8 hours and power consumption of 4 W rather than my 10 W... Thanks!

    Read the article

  • Quickbooks 2009 2010

    - by Bronwyn
    I have configured my bank account within QuickBooks to import bank statements. I have exported the bank statement, then used the convert process available within QB to convert it. The file name shows the BSB, then some other figures, and then the account number. However, it will not import. I am wondering how to make this work. Can I change the file name to match my QB account details and thus enable the importing? This is a technical question. Many thanks, Bronwyn

    Read the article

  • Ubuntu doesn't boot due to GRUB problems

    - by Dave
    Users out there, I came here with the spark of a hope that you could help me. I want to get rid of my old WinXP, because the game support for it seems to be slowly expiring now... So I took a second drive, just an old empty one I had at hand (an ATA Maxtor 90648D3), unplugged the other drive with WinXP so that it couldn't be harmed, and started the installation of Ubuntu 12.04. Everything went as it was supposed to, until the end: a normal shutdown after a successful installation process. But when I tried to boot my new Ubuntu from the HDD, it said:

        error: out of disk.
        grub rescue>

    So, what to do now? I already tried a lot of things in the terminal, e.g. update-grub as mentioned on http://opensource-sidh.blogspot.de/2011/06/recover-grub-live-ubuntu-cd-pendrive.html. Everything worked; it didn't complain about missing data or anything, but at the end of the day it still wasn't able to boot! The next step was to change the /etc/default/grub file so that it could load the ATA drivers first, so that there would be no problem with my drive. But even this didn't seem to have any effect; I'm still stuck with Ubuntu in Live-CD mode... If there is anybody out there who can help me, I would be very glad. Thanks for any support, Dave

    P.S.: I even tried to fix it with Boot-Repair, a small tool for Ubuntu, and it created a file with data that could probably help you to help me. You can find it at http://paste.ubuntu.com/1428022/
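
    From the grub rescue prompt itself there is a standard sequence worth trying before anything else – a sketch, assuming the Ubuntu partition is the first one on the old drive (adjust (hd0,msdos1) to whatever ls reveals):

        grub rescue> ls
        (hd0) (hd0,msdos1)
        grub rescue> set prefix=(hd0,msdos1)/boot/grub
        grub rescue> set root=(hd0,msdos1)
        grub rescue> insmod normal
        grub rescue> normal

    The "error: out of disk" message often points at an old BIOS that cannot address the whole drive; if the sequence above fails, a separate small /boot partition at the very start of the disk is the classic workaround.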

    Read the article

  • I have deleted Python files in /usr/bin and can't reinstall them

    - by Plonkaa
    I am a novice at Ubuntu and unfortunately I have deleted 3 files in the /usr/bin folder:

        python2.7
        python
        python2.6

    Now my Update Manager won't work, and when I type python into the terminal it says that it is no longer there. Please help; I've tried loads of different things but it just won't work. The closest I got was the following: I typed in 'sudo apt-get -f install' and I thought I had fixed it, but then I got an error message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          gir1.2-folks-0.6 gir1.2-polkit-1.0 libcogl5 mutter-common gir1.2-json-1.0
          libcaribou0 gir1.2-accountsservice-1.0 gir1.2-clutter-1.0 gir1.2-gkbd-3.0
          gir1.2-networkmanager-1.0 caribou libcogl-common libmutter0 gir1.2-mutter-3.0
          gjs gir1.2-caribou-1.0 libclutter-1.0-0 gir1.2-telepathylogger-0.2
          libclutter-1.0-common cups-pk-helper gir1.2-upowerglib-1.0 gir1.2-cogl-1.0
          libmozjs185-1.0 gir1.2-telepathyglib-0.12 gir1.2-gee-1.0 libgjs0c
          gnome-shell-common
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          ubuntu-sso-client
        The following packages will be upgraded:
          ubuntu-sso-client
        1 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/57.7 kB of archives.
        After this operation, 16.4 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up python-minimal (2.7.2-7ubuntu2) ...
        /var/lib/dpkg/info/python-minimal.postinst: 4: python2.7: not found
        dpkg: error processing python-minimal (--configure):
         subprocess installed post-installation script returned error exit status 127
        Errors were encountered while processing:
         python-minimal
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any advice is appreciated!
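
    The chicken-and-egg problem in that log is that python-minimal's post-install script needs python2.7 – the very binary that was deleted. One way out is to put the binary back without running any maintainer scripts – a sketch (package versions will differ; the key tool is dpkg -x, which only unpacks files):

        # fetch the .deb that contains /usr/bin/python2.7
        cd /tmp
        apt-get download python2.7-minimal

        # unpack it straight onto the filesystem, bypassing the broken postinst
        sudo dpkg -x python2.7-minimal_*.deb /

        # with the interpreter back in place, let apt finish the job properly
        sudo apt-get -f install

    If apt-get download is unavailable on that release, the same .deb can be fetched manually from packages.ubuntu.com and unpacked with the same dpkg -x call.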

    Read the article

  • ODI 11g - Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends, and then thought: why don't they know this? Such as this article here – in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience – just build the mapping and the appropriate KM is used – and improves out-of-the-box performance for file-to-file data movement. This improvement to the out-of-the-box handling for file-to-file data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, by default the IKM File to File (Java) knowledge module was assigned. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus – and it's a big plus – it was much faster than before. So if you are doing any file-to-file transformations, check it out!
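
    Purely to illustrate the pattern the KM description mentions (this is not the KM's actual code): a reader thread and a writer thread connected by a pipe, in plain java.io, so reading and writing overlap instead of running back to back.

        import java.io.*;

        public class PipedCopy {
            public static void main(String[] args) throws Exception {
                final PipedOutputStream out = new PipedOutputStream();
                final PipedInputStream in = new PipedInputStream(out, 1 << 16);

                // reader thread: fills the pipe from the source file
                Thread reader = new Thread(() -> {
                    try (InputStream src = new FileInputStream(args[0])) {
                        src.transferTo(out);  // Java 9+; loop with a byte[] on older JDKs
                        out.close();
                    } catch (IOException e) { throw new UncheckedIOException(e); }
                });
                reader.start();

                // main thread: drains the pipe into the target file
                try (OutputStream dst = new FileOutputStream(args[1])) {
                    in.transferTo(dst);
                }
                reader.join();
            }
        }

    One thread produces while the other consumes, which is the same overlap of read and write work that gives the KM its speed-up over a sequential read-then-write pass.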

    Read the article
