Search Results

Search found 26978 results on 1080 pages for 'load testing'.

  • "System call failed" error when trying to open "My Computer" etc. under the Start menu! What is happening?

    - by verve
    When I go to the Start menu I can load program icons, but if I click Documents, My Pictures, My Computer, Default Programs... any of the options on the right side, I get "System call failed". How do I solve this? Is my HD failing? Also, I don't know if it's connected, but yesterday my uTorrent stopped working in a way it never has before: I'm not able to download torrent files, and in the program I get "socket unreachable...". For the last 2 days my internet has also been super-slow. I checked Kaspersky for viruses; it says: "No active threats". Windows 7 64-bit.

  • Just how slow should my VPN be?

    - by David Heggie
    I have a VPN set up between a remote office with 2 ADSL connections (8Mb downstream, 512Kb upstream) and a main office with a 10Mb EFM connection (10Mb both up and down). The VPN is an IPsec connection using a DrayTek 2930 router at the ADSL end and a DrayTek 3200 router at the EFM end. However, testing with iperf, I'm unable to get anything over 600Kb or so down from the main office over this connection (traffic will pretty much always flow from the main office to the remote office). Whilst I realise that there is an overhead and I'm never going to get the "full" bandwidth available over this VPN, I'd like to think that there's something I should be looking at that may help improve it. I've tried using the DrayTek's built-in "VPN Trunking" features, which are supposed to load-balance connections, but this doesn't seem to improve matters much. I guess my question is: is this the kind of performance I should expect from this kind of setup and I'll just have to live with it, or should I be able to squeeze something more out of it through some VPN magic?
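
    For reference, a minimal iperf run of the kind described (the host name is a placeholder; iperf 2.x syntax):

        # on the remote-office end: listen
        iperf -s

        # on the main-office end: push traffic towards the remote office
        # for 30s (main-to-remote is the direction that matters here)
        iperf -c remote-office-host -t 30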

  • nginx hashing on GET parameter

    - by Sparsh Gupta
    I have two Varnish servers and I plan to add more. I am using an nginx load balancer to divide traffic between them. To utilize the maximum RAM of each Varnish server, I need the same request to reach the same Varnish server. The same request can be identified by one GET parameter in the request URL, say 'a'. In normal code, I would do something like this (if I need to divide all traffic between 2 Varnish servers):

        if ($arg_a % 2 == 0) { proxy_pass varnish1; }
        if ($arg_a % 2 == 1) { proxy_pass varnish2; }

    This is basically doing an even/odd check on GET parameter a and then deciding which upstream pool to send the request to. My questions are:

    1. What is the nginx equivalent of such code? I don't know if nginx supports modulo.
    2. Is there a better or more efficient hashing function built into nginx (0.8.54) that I could use? In future I want to add more upstream pools, so I'd rather not keep changing %2 to %3, %4 and so on.
    3. Is there any other alternate way to solve this problem?
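
    A sketch of the idiomatic answer: nginx's upstream hash directive (built in from nginx 1.7.2; on a 0.8.x build the third-party upstream_hash module provides the same idea). The backend addresses are placeholders:

        upstream varnish_pool {
            # distribute by the value of ?a=..., consistently, so adding
            # a server later only remaps a fraction of the keys
            hash $arg_a consistent;
            server 10.0.0.11:6081;   # varnish1 (placeholder address)
            server 10.0.0.12:6081;   # varnish2 (placeholder address)
        }

        server {
            listen 80;
            location / {
                proxy_pass http://varnish_pool;
            }
        }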

  • SAMBA and Linux ACLs -- "Permission denied" on write to share but file written nevertheless

    - by MCH
    I set up a writable share directory /home/net/share with an ACL like this:

        sudo mkdir -p "/home/net/share"
        sudo setfacl -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" "/home/net/share"

    My /etc/samba/smb.conf looks like this:

        [global]
        workgroup = w
        server string = server
        security = user
        load printers = no
        log file = /var/log/samba/%m.log
        max log size = 50
        dns proxy = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        encrypt passwords = true
        invalid users = nobody root
        follow symlinks = yes
        wide links = yes

        [share]
        comment = Writable by localuser and remoteuser
        path = /home/net/share
        valid users = remoteuser
        read only = no
        public = no
        printable = no

    Locally, localuser and remoteuser have user accounts and smbpasswds and can both read, create and delete files in /home/net/share. But when I log on from a different machine (like this: sudo mount -t cifs //server/share mountpoint/ -o username=remoteuser), I get "Permission denied" both when trying to create directories and files. Oddly, though, it does create files (not directories!) despite these messages! How can I get this working?
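
    A couple of hedged checks that narrow this kind of problem down (paths as above):

        # confirm the effective ACL actually grants remoteuser rwx
        getfacl /home/net/share

        # exercise the share through Samba itself, bypassing the cifs
        # mount, to see whether the server or the client rejects the mkdir
        smbclient //server/share -U remoteuser -c 'mkdir testdir'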

  • vps running out of memory, 200MB free

    - by demon
    At the beginning of this year I took a VPS for my website because I was running up against the resource limits of shared hosting. Here are the things I know: 2GB memory with 1GB swap, Debian x64 server edition installed. Software running on the webserver: mysql, apache, postfix, pop3, imap, amavisd, clamd, cron, fail2ban, munin-node, pure-ftpd, spamd, nginx. Now for the setup: nginx listens on port 80 and handles the static files; the PHP side is done by apache2 running mod_php in combination with APC (no variable caching!). I am running a pretty 'busy' Drupal and phpBB stack on the server; for Drupal I am using Boost and Authcache to handle the server load, on a Pressflow stack. phpBB is just phpBB3 with some mods installed, and has at most 30 users online at a time. The problem is that the server starts using swap a few days after a reboot and thus the site becomes slower. I've added pictures of Monit and Munin, so maybe somebody can help me out...
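
    As a starting point for tracking this down, a hedged snapshot of where the memory goes:

        # top resident-memory consumers
        ps aux --sort=-rss | head -n 15

        # overall memory/swap picture
        free -m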

  • Performance impact of Zones.

    - by nospam(at)example.com (Joerg Moellenkamp)
    I was really astonished when I saw this question, because it was an old acquaintance from years ago that I hadn't heard for a long time. Yet there it was again, the question: "What's the overhead of Zones?". Sun didn't say "zero", and Oracle doesn't either; we say "minimal". However, in all the performance-analysis gigs on customer systems I have done since the introduction of Zones, I have failed to measure any overhead caused by Zones themselves. What I did see was additional load introduced by processes that wouldn't be there if you used only one zone: additional monitoring daemons, or additional daemons with a controlling or supervising job for the application, which resulted in slightly longer runtimes because those daemons wanted some cycles on the CPU as well. So when someone tells me they measured a slight slowdown, I ask whether they really measured the impact of the virtualization layer or of a side effect like those described above. It may seem a little hard to believe that a virtualisation technology has no overhead; however, keep in mind that there is no hypervisor, just one kernel running that looks and behaves like many operating system instances to apps and users. This imposes some limits on the technology (because there is just one kernel running, you can't have zones with different kernel versions... obvious even to the cursory observer), but it is key to its light weight and thus to the low overhead. Continue reading "Performance impact of Zones."

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is that once the redirect is set, it also affects /specialdir - even if I right-click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

  • LSB Script: how do i know if something goes wrong?

    - by ianaz
    How do I know if an LSB script fails to load, and where do I check the log of the LSB scripts? I added two scripts with the following command:

        update-rc.d scriptname defaults

    but only one of them launches the things I need. It does not seem to be a script error, since if I launch it with /etc/init.d/scriptname it works. This is my script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          nodes
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Starts all node apps
        # Description:       Starts all node apps like AAM, AMT,...
        ### END INIT INFO

        echo "Launch Node applications with forever"
        export PATH=/usr/local/bin:$PATH

        # Starts the redis server
        redis-server

        # Starts AAM
        forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js
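
    Boot-time init scripts don't log anywhere by default; one hedged way to see what happens at boot is to make the script capture its own output (the log path is an assumption):

        # add near the top of the script, after the INIT INFO block:
        exec >> /var/log/nodes-init.log 2>&1
        set -x   # trace every command into the log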

  • Upload large database SQL file

    - by Devy
    I have a database of more than 20GB on my hard disk. What is the best way to upload it with the least cost on the server? I'm on Windows 7, and I have FTP and SSH access to the server. I avoid using FTP because my connection cuts off a lot; I can't imagine re-uploading the whole file after failing at 99%. I found some tools that split the large .sql file into small .sql files, but they didn't mention how to gather these files back into one. Another way is to archive the big .sql file to .rar with the -v option, upload the parts through FTP, then unpack them. But unpacking will also cost, right? I know it will cost something in any case, but any best practice will be strongly appreciated.
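
    For reference, a hedged sketch of both halves of that workflow with standard tools (file names assumed): rejoining split pieces is a one-liner, and rsync over SSH resumes interrupted transfers instead of restarting them.

        # split into 100 MB pieces (needs GNU coreutils, e.g. via
        # Cygwin or Git Bash on Windows)
        split -b 100M dump.sql dump.sql.part_

        # upload with resume support over SSH; re-running the same
        # command continues where it stopped
        rsync --partial --progress -e ssh dump.sql user@server:/tmp/

        # on the server: stitch the pieces back together, in order
        cat dump.sql.part_* > dump.sql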

  • SDL: Clipping a Sprite Sheet from Left to Right

    - by 0X1A
    I'm trying to get a sprite sheet clipped in the right order, but I'm a bit stumped; every iteration I've tried has tended to be in the wrong order. This is my current implementation:

        Frames = (TempSurface->h / ClipHeight) * (TempSurface->w / ClipWidth);
        SDL_Rect Clips[Frames];
        for (i = 0; i < Frames; i++) {
            if (i != 0 && i % (TempSurface->h / ClipHeight) == 0)
                ColumnIndex++;
            Clips[i].x = ColumnIndex * ClipWidth;
            Clips[i].y = i % (TempSurface->h / ClipHeight) * ClipHeight;
            Clips[i].w = ClipWidth;
            Clips[i].h = ClipHeight;
        }

    where TempSurface is the entire sheet loaded into an SDL_Surface and Clips[] is an array of SDL_Rects. What results from this is a sprite sheet set to SDL_Rects in the wrong order. For example, a sheet of dimensions 4x4 would load desirably as this:

        | 0 | 1 | 2 | 3 |
        | 4 | 5 | 6 | 7 |
        | 8 | 9 | 10| 11|
        | 12| 13| 14| 15|

    but would be set in this order:

        | 0 | 4 | 8 | 12|
        | 1 | 5 | 9 | 13|
        | 2 | 6 | 10| 14|
        | 3 | 7 | 11| 15|

    What should I be doing for these to be set correctly?
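
    The loop above walks the sheet column-by-column because the row and column roles are swapped; a minimal row-major sketch, reusing the question's variable names:

        /* index left-to-right, top-to-bottom:
           column = i % FramesPerRow, row = i / FramesPerRow */
        int FramesPerRow = TempSurface->w / ClipWidth;
        for (i = 0; i < Frames; i++) {
            Clips[i].x = (i % FramesPerRow) * ClipWidth;
            Clips[i].y = (i / FramesPerRow) * ClipHeight;
            Clips[i].w = ClipWidth;
            Clips[i].h = ClipHeight;
        }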

  • Distributing my Application inside a Debian Virtual Machine Image-- How to meet GPL obligations?

    - by bdk
    I have a Linux application I've developed, and I have created a standalone VMware image that people can download to try out the application without needing to install and configure a Linux server. I created this VMware image by starting with a base Debian system, installing a bunch of packages, and then configuring all the packages and daemons my application depends on. On boot, the VMware image goes straight into an X server running only my application and no window manager, so it's more of a "virtual appliance" than a normal Linux desktop environment. Users will generally never see a command prompt or any application other than my own. (My application itself I have a handle on the licensing issues of.) Now I would like to distribute this image, but I'm not sure how to meet my obligations under the GPL (and the other licenses the various Debian components are released under). As I understand it, I have two primary obligations to meet:

    1. Providing copyright and license information for each component I use. As I understand it, all the information I am required to present is located in the /usr/share directory in Debian, but since my users will generally never touch a console or terminal, they will never see this. Does providing a text file containing a concatenation of all the files inside /usr/share meet this obligation?

    2. Making source code available for all components I distribute. Since I am not creating the image from source but from binary packages, I can't provide the actual source code that results in exactly my image being generated. Does providing an FTP mirror of the Debian source debs for all the packages I use, plus an offer to send that mirror on DVDs, meet this obligation?

    Is there anything else I'm required to do to legally distribute this image?

  • Which modules can be disabled in apache2.4 on windows

    - by j0h
    I have an Apache 2.4 webserver running on Windows. I am looking into system hardening and the config file httpd.conf. There are numerous LoadModule lines, and I am wondering which modules I can safely disable for performance and/or security improvements. Some examples of things I would think I can disable: LoadModule cgi_module. Others, like LoadModule rewrite_module, LoadModule version_module, LoadModule proxy_module and LoadModule setenvif_module, I am not so sure can be disabled. I am running PHP5 as a scripting engine, with no databases, and that is it. My loaded modules are:

        core mod_win32 mpm_winnt http_core mod_so mod_access_compat mod_actions mod_alias mod_allowmethods mod_asis mod_auth_basic mod_authn_core mod_authn_file mod_authz_core mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_dav_lock mod_dir mod_env mod_headers mod_include mod_info mod_isapi mod_log_config mod_cache_disk mod_mime mod_negotiation mod_proxy mod_proxy_ajp mod_rewrite mod_setenvif mod_socache_shmcb mod_ssl mod_status mod_version mod_php5
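
    For what it's worth, disabling a module on Windows is just a matter of commenting out its LoadModule line and restarting; a hedged sketch (which modules are actually safe to drop depends on the rest of the config, so test after each change):

        # httpd.conf - comment out a module to disable it
        #LoadModule cgi_module modules/mod_cgi.so
        #LoadModule proxy_module modules/mod_proxy.so
        #LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

        # verify the config still parses before restarting:
        #   httpd.exe -t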

  • How to identify which website on my instance is receiving lots of traffic?

    - by Bob Flemming
    I am new to server administration and have just set up a new quad-core instance which hosts around 15 websites. Over the past couple of days my server load has been averaging around 15.00. I believe it is because one (or maybe more) of the websites is getting spammed by spambots. Typing 'top' at the command line shows many processes from user 'www-data', which indicates lots of web traffic. Is there an easy way to identify which one of my sites is taking a hammering? Reading the Apache error logs is a very difficult task, as most of the websites receive daily traffic of 10,000+ unique users. Any help would be appreciated!
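
    A hedged way to get a quick per-site picture from the access logs (assumes Debian/Ubuntu-style Apache logging with one access log per vhost; the log file name below is a placeholder):

        # which vhost log is biggest - a rough proxy for traffic
        ls -lS /var/log/apache2/*access*.log | head

        # top requesting IPs in the busiest site's log; spambots stand out
        awk '{print $1}' /var/log/apache2/busiest-site-access.log | sort | uniq -c | sort -rn | head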

  • How to control messages to the same port from different emitters?

    - by Alex In Paris
    Scene: a company has many factories X, each of which emits messages to the same receive port on a BizTalk server Y; if all messages are processed without much delay, each triggers an outgoing message to another system Z. Problem: sometimes a factory loses its connection for half a day or more, and when the connection is re-established, thousands of messages get emitted. Now, the messages still get processed well by Y (BizTalk can easily handle the load), but system Z can't handle the flood and may lock up, severely delaying the processing of all other messages from the other Xs. What is the solution? Creating multiple receive locations that would permit us to pause one X or another would lose us information if the factory isn't smart enough to know whether a message was received or not. What is the basic pattern to apply in BizTalk for this problem? Would some throttling parameters help to limit the flow from any one X? Or are there techniques on the outbound side of Y which I should use instead? I would prefer the latter, since I can be confident that the message box will remember any failures, which could then be resumed.

  • Correct permissions for /var/www and wordpress

    - by dpbklyn
    Hello, and thank you in advance! I am relatively new to Ubuntu, so please excuse the newbie-ness of this question... I have set up a LAMP server (Ubuntu Server 11.10) and I have access via SSH, and to the "It works" page from a web browser, from inside my network (via IP address) and from outside using DynDNS. I have a couple of projects in development with some outside developers, and I want to use this server as a development server for testing and client approvals. We have some Wordpress projects that sit in subdirectories: /var/www/wordpress1, /var/www/wordpress2, etc. I cannot access these subdirectories from a browser in order to set up WP, or (I assume) to see the content in a browser: I get a 403 Forbidden error. I assume this is a permissions problem. Can you please tell me the proper permission settings to: 1) allow the developers and me to read/write; 2) allow WP to set itself up and do its thing; 3) allow visitors to access the site(s) via the web. I should also mention that the subfolders are actually symlinks to folders on another internal hdd. I don't think this will make a difference, but I thought I should disclose it. Since I am a newbie to Ubuntu, step-by-step directions are greatly appreciated! Thank you for taking the time! dp

        total 12
        drwxr-xr-x  2 root root 4096 2012-07-12 10:55 .
        drwxr-xr-x 13 root root 4096 2012-07-11 20:02 ..
        lrwxrwxrwx  1 root root   43 2012-07-11 20:45 admin_media -> /root/django_src/django/contrib/admin/media
        -rw-r--r--  1 root root  177 2012-07-11 17:50 index.html
        lrwxrwxrwx  1 root root   14 2012-07-11 20:42 media -> /hdd/web/media
        lrwxrwxrwx  1 root root   18 2012-07-12 10:55 wordpress -> /hdd/web/wordpress

    Here is the result of using chown -R www-data:www-data /var/www:

        total 12
        drwxr-xr-x  2 www-data www-data 4096 2012-07-12 10:55 .
        drwxr-xr-x 13 root     root     4096 2012-07-11 20:02 ..
        lrwxrwxrwx  1 www-data www-data   43 2012-07-11 20:45 admin_media -> /root/django_src/django/contrib/admin/media
        -rw-r--r--  1 www-data www-data  177 2012-07-11 17:50 index.html
        lrwxrwxrwx  1 www-data www-data   14 2012-07-11 20:42 media -> /hdd/web/media
        lrwxrwxrwx  1 www-data www-data   18 2012-07-12 10:55 wordpress -> /hdd/web/wordpress

    I am still unable to access via browser...
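
    Since the Wordpress directories live on another disk behind symlinks, a 403 often comes from the permissions on the symlink targets rather than on /var/www itself; a hedged baseline to try (paths assumed), while also checking that the vhost's <Directory /var/www> block includes Options FollowSymLinks:

        # the target tree must be readable by www-data all the way down
        sudo chown -R www-data:www-data /hdd/web/wordpress
        sudo find /hdd/web/wordpress -type d -exec chmod 755 {} \;
        sudo find /hdd/web/wordpress -type f -exec chmod 644 {} \;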

  • Most secure way to have IPtables auto-loaded using Debian / Linux

    - by networkIT
    I'd like to know the safest way to load iptables rules on Debian. Of course, I can use a script that calls iptables-restore:

        #!/bin/sh
        iptables-restore < /etc/firewall.conf

    but:

    1) Where is the safest place to have it loaded? /etc/network/if-up.d? I'm concerned about the script being loaded early enough at boot time, and reliably enough when plugging/unplugging interfaces...
    2) Is this script method using iptables-restore the most secure way?
    3) Additionally, how far does the answer carry over to other Linux distros (Ubuntu, Fedora, CentOS)?

    Thanks ^^
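
    For the timing concern in (1), a common pattern is to hook if-pre-up.d instead, so the rules are in place before the interface comes up; a minimal sketch, assuming the rules file above:

        #!/bin/sh
        # saved as /etc/network/if-pre-up.d/iptables and made executable:
        #   chmod +x /etc/network/if-pre-up.d/iptables
        iptables-restore < /etc/firewall.conf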

  • Multi-Part Map Troubleshooting

    - by Michael Stephenson
    Scenario

    I came across a nice little one with multi-part maps the other day. I had an orchestration where I needed to combine 4 input messages into one output message, as below:

        Input messages:  Company Details, Member Details, Event Message, Member Search
        Output message:  Member Import

    I thought my orchestration was working fine, but for some reason when I tried to send my message it had no content under the root node, like below:

        <ns0:ImportMemberChange xmlns:ns0="http://---------------/"></ns0:ImportMemberChange>

    My map is displayed in the picture below. I knew that the member search message might not have any elements under it, but its root element would always exist. The rest of the messages were expected to be fully populated. I tried a number of different things, and testing my map outside of the orchestration it always worked fine.

    The Eureka Moment

    The eureka moment came when I was looking at the XSLT produced by the map. Even though I'd tried swapping the order of the messages in the input of the map, you can see in the picture below that the first part of the processing of the message (circled in red) is doing a for-each over the GetCompanyDetailsResult element within the GetCompanyDetailsResponse message. This is because the processing is driven by the output message format, and the first element to output is the OrganisationID, which comes from the GetCompanyDetailsResponse message. At this point I could focus my attention on this message: the XSLT shows that if this xpath statement doesn't return an element from the GetCompanyDetailsResponse message, then the whole body of the output message will not be produced, and the output from the map will look like the message I was getting:

        <ns0:ImportMemberChange xmlns:ns0="http://---------------/"></ns0:ImportMemberChange>

    I was quickly able to prove this in my map test, which showed this was a likely candidate for the problem. I revisited the orchestration, focusing on the creation of the GetCompanyDetailsResponse message, and there was actually a bug in the orchestration which resulted in the message being incorrectly created. Once this was fixed, everything worked as expected.

    Conclusion

    Originally I thought it was a problem with the map itself, and looking online there wasn't really much content around troubleshooting multi-part map problems, so I thought I'd write this up. I guess technically it isn't a multi-part map problem, but I spent a good couple of hours the other day thinking it was.
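
    In outline, the generated XSLT has roughly this shape (element names from the post; the rest is a simplified sketch, not the actual compiler output), which is why a badly-created input message yields an empty root:

        <!-- everything below the output root is generated inside this
             loop, so if GetCompanyDetailsResult is absent only the
             empty root element is emitted -->
        <ns0:ImportMemberChange>
          <xsl:for-each select="GetCompanyDetailsResponse/GetCompanyDetailsResult">
            <OrganisationID><xsl:value-of select="OrganisationID"/></OrganisationID>
            <!-- ... the rest of the mapped output ... -->
          </xsl:for-each>
        </ns0:ImportMemberChange>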

  • So what is Active GridLink for RAC?

    - by Ruma Sanyal
    I had referred to Active GridLink for RAC in my blog yesterday and have since received several questions on this topic, so I decided to revisit Active GridLink. With the release of version 11g, Oracle WebLogic Server started to provide strong support for the Real Application Clusters (RAC) features in Oracle Database 11g, minimizing database access time while allowing transparent access to rich pooling-management functions that maximize both connection performance and availability. WebLogic is the only application server in the marketplace which has been fully integrated and certified with Oracle Database RAC 11g without losing any rich functionality. Active GridLink provides Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and graceful shutdown of RAC instances. As the key foundation for deeper integration with Oracle RAC, this single data-source implementation in Oracle WebLogic Server supports the full and unrestricted use of database services as the connection target for a data source. For more details, and to understand how our customer NEC leverages this capability, read the whitepapers on this topic. Get in-depth 'how-to' details from this YouTube video from our resident expert, Frances Zhao.

  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). For example, www.google.com.nyud.net:8080 points to one of their cache servers, which looks at the Host header and returns www.google.com. I'm using namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than namecheap (like Route 53)?
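
    For reference, in plain zone-file terms the record being described is just the following (names are placeholders); whether a given provider's UI accepts and publishes it is the thing to check:

        ; a DNS wildcard matches any depth of labels (RFC 4592),
        ; as long as no more specific name exists alongside it
        *.mydomain.com.   IN   CNAME   my-elb-123456.us-east-1.elb.amazonaws.com.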

  • Optimising news fetching

    - by aceBox
    I have a web scraper for scraping news from different sources on WP7. My current approach for doing this is:

    1. Load newspaper information from an XML file.
    2. Go to the specified sections and fetch the URLs of the news items.
    3. Go to each URL and fetch the headline, image and publisher.
    4. Display using an MVVM architecture on Windows Phone.

    The whole thing takes place asynchronously, meaning as soon as a URL from a section of a newspaper is fetched, it is added to the queue, and the second stage, fetching the headline, image, etc., starts... and as soon as this is fetched, even for one article, it is displayed. Later on, as more articles are fetched, they are added to the list. For the fetching I am using a SmartThreadPool (http://www.codeproject.com/Articles/7933/Smart-Thread-Pool) for Windows Phone. My problem is that even fetching around 80 items (in total) from 9 publications takes more than a minute. How can I speed up the procedure? Note: I have a two-stage approach because many times the images are not available with the headlines and are only found in the article.

  • How do I run vino-server without a monitor attached in Ubuntu 10.04

    - by Ole
    I just upgraded to Ubuntu 10.04 yesterday on a headless home server. I use the server for a variety of purposes, and what I don't know how to do via SSH I've always been able to do through VNC. However, since the upgrade, vino-server will no longer run if there isn't a monitor attached; before, it used to start up without a problem. Even attempting to run the server via SSH gives me a "could not load display" error. Summary: I need to get vino-server running at boot time on a server with Ubuntu 10.04, without a monitor attached.
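
    One hedged workaround is to give vino a virtual framebuffer to attach to instead of real hardware (assumes the xvfb package is installed; the vino-server path is the usual Ubuntu location):

        # start a virtual X display, then point vino at it
        Xvfb :1 -screen 0 1024x768x16 &
        export DISPLAY=:1
        /usr/lib/vino/vino-server &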

  • Javascript is not loading

    - by Oden
    Hey, I've got a problem with JavaScript under Ubuntu that drives me crazy. I've been using Gedit for my web sites since I became an Ubuntu user. When I start a new website I create the folder structure (usually with the GNOME terminal) and copy the files I need into it. The next step is creating an index.html where I build the design and basic JavaScript functionality. The JavaScript is stored in a subfolder of the project, and when I try to load a file from it using a script tag in the header, my whole page body disappears. Also, if the source contains a script tag with its own body and it's not the first one, its code won't run. I've tried to solve the problem by setting permissions with sudo chmod -R 777 . but nothing changed. CSS is loading correctly, but JS isn't. I'm using the newest version of Apache with no mod_rewrite stuff, and I get the same problem when I run the HTML from a file (file:///...). Does anyone know how to solve this problem?
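
    Those symptoms (body vanishing, later scripts not running) are a classic sign of self-closed script tags rather than a permissions problem; a hedged illustration (the src path is a placeholder):

        <!-- correct: an explicit closing tag -->
        <script type="text/javascript" src="js/main.js"></script>

        <!-- broken in most browsers: HTML parsers don't treat <script/>
             as closed, so everything after it is swallowed as script -->
        <script type="text/javascript" src="js/main.js" />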

  • Freeze on Boot - Windows XP

    - by kyle
    OK, first off, I know I should have Windows 7 but I don't. It's a really old computer and I'm not sure it can run the latest Ubuntu release, but I'm trying anyway. Here are my specs, or what I could gather from the computer (viruses are everywhere and most of the drivers are gone):

        OS: Windows XP (it won't let me see which edition; that is blocked)
        Graphics card: can't access Device Manager (blocked too), but it's from the 2003-2004 era
        RAM: 512MB last time I checked
        USB: drivers gone
        Wireless card: drivers gone
        LAN card: drivers gone
        Audio: drivers gone
        CD drive: it runs and works, but I can't check its properties (that's blocked too)
        Computer: Dell Inspiron 600m

    Good news: I have the driver CD for everything. Bad news: I don't have the OS CD. I know the computer is trash, but I want to wipe it anyway and just install Ubuntu. The second problem is it doesn't even want to load the CD all the way; it just gets stuck at the Ubuntu screen with all the bubbles orange, and if I restart it just does it again. Any help would be nice.

    EDIT: OK, I tried the first comment and it said I need a firmware file from http://wireless.kernel.org/en/users/Drivers/b43#devicefirmware - the firmware is b43/ucode5.fw, b43-open/ucode5.fw. Is there any other way of installing this firmware without being in Ubuntu? Because what I have read is that I need to be in Ubuntu to do this... Thanks for all the help you are giving me. I have read your comments, but I don't want to wait another two hours downloading that if I can get it working on this...

  • Connection reset - error 101

    - by Maja
    I have a problem with my internet connection: when I try to load a website, it always shows this error: "Error 101 (net::ERR_CONNECTION_RESET): The connection was reset." I'm using Win7 64-bit and I have an Asus RT-N10 router. First, I tried changing the MTU from 1504 to 1472 and it worked for a while, but yesterday it started again. Now I have an MTU of 1440, but I don't want to lower it further. Is there another solution? By the way, I have no malware on my laptop (I used an Avast scan), and I've also deleted cookies and disabled the proxy server (I'm not using any).
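
    In case it helps to rule out a client-side MTU mismatch (as opposed to the router's setting), this is the usual way to inspect and set it on Windows 7; the interface name is whatever the first command reports:

        netsh interface ipv4 show subinterfaces
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1440 store=persistent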

  • Psexec issue when running an application on a Windows Server 2008 R2 machine from a 2003 R2 machine

    - by Vermin
    I am trying to run an application on a Windows Server 2008 R2 machine from a Windows Server 2003 R2 machine, using a batch file with the following line of code:

        psexec \\nightmachine -u DOMAIN\User -p Password -i "C:\FilePath\Application.exe" argument1 argument2

    The application fails to run correctly when started using psexec, but it runs correctly if I log into nightmachine with the same user and start it from its file path via cmd. I have been able to get hold of the error the application returns, from its log, and the exception is the following:

        System.DllNotFoundException: Unable to load DLL 'rasapi32.dll': A dynamic link library (DLL) initialization routine failed. (Exception from HRESULT: 0x8007045A)

    After searching for that error code on the net, there are a lot of posts saying that this is caused by file corruption, but I can't see why that would be the case, as the application runs normally when not run from psexec. (The user is an administrator on both machines.) Can anyone please help me with this? If any more information is needed to help solve this issue, then please ask and I will do my best to post it.
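
    One hedged variation worth trying: on Vista/2008-era targets, processes can run with a filtered (non-elevated) token under UAC even for administrators, which can make DLL initialization fail; newer PsExec versions have a -h switch that requests the account's elevated token:

        rem same command, but asking for the elevated token on the target
        psexec \\nightmachine -u DOMAIN\User -p Password -i -h "C:\FilePath\Application.exe" argument1 argument2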
