Search Results

Search found 48823 results on 1953 pages for 'run loop'.

Page 441/1953

  • Compiling/executing Java on Sublime Text 2 works fine except that it cannot read user input

    - by meiryo
    I am a student learning Java and I want to compile and run some simple Java programs from Sublime Text 2 (Eclipse is very slow on my laptop). Here is my JavaC.sublime-build file so far:

      {
          "cmd": ["sublimejavaexec.bat", "$file"],
          "file_regex": "^(...*?):([0-9]*):?([0-9]*)",
          "selector": "source.java"
      }

    So far it can run code that does not require user input, but when the program uses a java.util.Scanner to read input, it either skips past the read or throws an error. Can anyone suggest a solution, such as a plug-in, or does ST2's build console actually support this kind of input? Thanks.
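
    Sublime Text's build output panel is not interactive, so programs that read from stdin have nothing to read; the usual workaround is to compile from Sublime but launch the program in a real console window. A minimal sketch of what a console-launching wrapper could look like -- the poster's actual sublimejavaexec.bat is not shown, so everything below is an assumption, not their script:

      @echo off
      rem hypothetical wrapper: compile the file Sublime passes in, then run the class
      rem in a separate console window so Scanner/System.in can read keyboard input
      rem (assumes the class is in the default package)
      javac "%~1" || exit /b 1
      cd /d "%~dp1"
      start "Java" cmd /k java "%~n1"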

    Read the article

  • Can't use command line – "command not found" after editing PATH

    - by MEM
    I'm running OS X Mavericks and was trying to install MAMP PRO 2.2. I wanted the PATH variable to include the PHP binaries that ship with MAMP PRO, so I added the following line to my ~/.bash_profile file:

      export PATH=/Applications/MAMP PRO/bin/php/php5.5.3/bin:$PATH

    As you may notice, since I have MAMP PRO and not just MAMP, the path contains a space. As a consequence, I now get the following error every time I open the terminal:

      -bash: export: `PRO/bin/php/php5.5.3/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin': not a valid identifier

    Worse: I can't get any command to run. ls, clear and the rest all give "command not found", and I don't even know the absolute path for ls. How can I make the commands work again, so that I can properly fix the path I was trying to set up in .bash_profile?
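
    The unquoted space splits the assignment, so PATH ends up truncated and the shell can no longer find the standard binaries. A sketch of the usual recovery (the directories below are the macOS defaults; the quoted line is the fix being attempted):

      # in the broken session, restore a sane PATH by hand
      export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin

      # then edit ~/.bash_profile and quote the entry that contains the space
      nano ~/.bash_profile
      #   export PATH="/Applications/MAMP PRO/bin/php/php5.5.3/bin:$PATH"

      # open a new terminal (or run `source ~/.bash_profile`) to pick up the fix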

    Read the article

  • Kill the Menu......

    - by Curdasss
    I downloaded and installed VMware Player 3.0. Two questions: 1.) Can I use Windows SMS to install the Player on a user's computer? If yes, where can I find this information? 2.) Is there a hack or fix that lets me kill/remove/hide the Player menus? I don't want the user to build new VMs, just run the one I want them to run. I'm not allowed to use Workstation or ACE to distribute the VM. Thanks.

    Read the article

  • Postfix issues sending mail to addresses under domain located on server

    - by iamthewit
    I recently installed Virtualmin on my shiny new Rackspace cloud server. Everything went seamlessly, but I've been having some issues getting email to send properly. The problem seems to be that the server cannot send mail to addresses whose domain is hosted on my server. For example, I run multiple virtual domains on the server; let's call this one test.com. When I run the mail command from the shell (mail [email protected]) I get the following in my maillog:

      Oct 6 14:55:18 test postfix/pickup[8737]: DC1131612CC: uid=0 from=
      Oct 6 14:55:18 test postfix/cleanup[8769]: DC1131612CC: [email protected]
      Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: [email protected], size=353, nrcpt=1 (queue active)
      Oct 6 14:55:18 test postfix/error[8771]: DC1131612CC: [email protected], relay=none, delay=0, delays=0/0/0/0, dsn=5.0.0, status=bounced (User unknown in virtual alias table)
      Oct 6 14:55:18 test postfix/cleanup[8769]: DD07D1612D1: [email protected]
      Oct 6 14:55:18 test postfix/bounce[8772]: DC1131612CC: sender non-delivery notification: DD07D1612D1
      Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: from=<>, size=2268, nrcpt=1 (queue active)
      Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: removed
      Oct 6 14:55:18 test postfix/local[8773]: DD07D1612D1: [email protected], relay=local, delay=0.03, delays=0/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME)
      Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: removed

    When I run mail [email protected] to an address on an external domain, the message is sent and received perfectly fine. I'm a bit of a noob when it comes to servers, but I pick things up fairly quickly, so please excuse any incorrect terminology. Any help would be greatly appreciated; I've been googling for quite a while but haven't found a solution yet. Here is the reformatted postconf output (let me know if you want the reformatted main.cf as well):

      alias_database = hash:/etc/postfix/aliases
      alias_maps = hash:/etc/postfix/aliases
      broken_sasl_auth_clients = yes
      command_directory = /usr/sbin
      config_directory = /etc/postfix
      daemon_directory = /usr/libexec/postfix
      debug_peer_level = 2
      home_mailbox = Maildir/
      html_directory = no
      mailbox_command = /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME
      mailq_path = /usr/bin/mailq.postfix
      manpage_directory = /usr/share/man
      myhostname = server.test.com
      newaliases_path = /usr/bin/newaliases.postfix
      readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
      sample_directory = /usr/share/doc/postfix-2.3.3/samples
      sender_bcc_maps = hash:/etc/postfix/bcc
      sendmail_path = /usr/sbin/sendmail.postfix
      setgid_group = postdrop
      smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_security_options = noanonymous
      unknown_local_recipient_reject_code = 550
      virtual_alias_maps = hash:/etc/postfix/virtual
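
    "status=bounced (User unknown in virtual alias table)" usually means the domain is treated as a virtual alias domain but the specific address has no entry in the map named by virtual_alias_maps. A sketch of the usual check and fix -- the address and local mailbox below are placeholders, not values from the post:

      # see whether the address resolves through the virtual alias map
      postmap -q "someone@test.com" hash:/etc/postfix/virtual

      # if it prints nothing, add a mapping (placeholder values) and rebuild the map
      echo "someone@test.com   someone.test" >> /etc/postfix/virtual
      postmap /etc/postfix/virtual
      postfix reload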

    Read the article

  • backing up ntfs disk using rsync on ubuntu

    - by user70366
    For a long time I was using Windows. I have a separate drive I use to keep copies of my media files, photos etc., which I periodically back up to an external drive; in Windows I used SyncToy for this. After Windows stopped booting I decided to switch to Linux (Ubuntu 10.10). That seems to be going fine, but now I want to back up my drive to the external drive like before. The two drives are mostly identical already, with maybe 10GB of extra files added. So I try to use rsync to synchronise the two drives like this:

      rsync --dry-run -rvlt --modify-window=1 /media/Antonio1TB/Backup /media/FREECOM\ HDD/Backup

    The problem is that the dry run indicates every file on the drive will be copied, not just the files I have recently added. What is the correct command to sync two NTFS drives under Ubuntu so that files that already exist don't get copied again? Thanks.
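
    One thing worth checking before blaming NTFS: without a trailing slash, rsync copies the source directory itself into the destination, so the comparison happens against a (probably empty) Backup/Backup subdirectory and everything looks new. A sketch keeping the poster's options:

      # trailing slash on the source: compare the *contents* of Backup with the
      # contents of the existing Backup directory on the external drive
      rsync --dry-run -rvlt --modify-window=1 \
          /media/Antonio1TB/Backup/ "/media/FREECOM HDD/Backup/"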

    Read the article

  • Determine process using a port, without sudo

    - by pat
    I'd like to find out which process (in particular, its process id) is using a given port. The catch is that I don't want to use sudo, nor am I logged in as root. The processes I want this to work for are run by the same user who is doing the looking, so I would have thought this was simple. Neither lsof nor netstat will tell me the process id unless I run them with sudo, though they will tell me that the port is in use. As some extra context: I have various apps all connecting via SSH to a server I manage and creating reverse port forwards. Once those are set up, my server does some processing using the forwarded port, and then the connection can be killed. If I can map specific ports (each app has its own) to processes, this becomes a simple script. Any suggestions? This is on an Ubuntu box, by the way, but I'm guessing any solution will be standard across most Linux distros.
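
    For sockets owned by your own processes, both ss and lsof report the PID without root; the PID column only goes blank for other users' processes. A sketch (the port number is a made-up example):

      # list listening TCP sockets with the owning PID; works unprivileged for your own processes
      ss -ltnp | grep ':5022'

      # or with lsof, restricted to listening TCP sockets on that port
      lsof -nP -iTCP:5022 -sTCP:LISTEN

    If the forwarded port turns out to be held by root's sshd rather than by the user's own process, the PID will still come back blank without sudo.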

    Read the article

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server could in theory handle 25-50 instances before users would notice a slowdown. However, only one instance can run per Windows installation (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but what would you recommend for a server farm or similar setup for this? We don't have the setup to purchase our own servers, so we must use a 3rd party. We have budgeted $500 - $1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

    Read the article

  • how to stop a driver from running - it's self-protected and rootkit-hidden

    - by Aristos
    I have a serious problem: for the first time I cannot stop a program from running. Something on a laptop is running as a legacy system driver; it is self-protecting and hidden behind a rootkit service. Anything I try to remove it with fails. When a program or anti-rootkit tool tries to remove the hidden registry setting to make it stop, I get this error: "a device attached to the system is not functioning". Any idea that can help me stop it from running, or even delete it at start-up? My one limitation is that the hard drive is in a laptop and I cannot remove it and attach it to another machine. The program does not let me touch the registry or the file, MoveOnBoot fails to delete it, RootRepeal fails to delete it, RootkitRevealer from Sysinternals fails to reveal it; everything fails. Does anyone have experience with this, or any suggestion on how to stop this driver from running?
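
    For reference, the standard (non-rootkit) way to stop a driver is to disable its service entry and reboot; since the poster reports that in-Windows attempts are blocked, these would typically have to be run from an offline or recovery environment. A sketch, with a placeholder driver name:

      rem disable the driver service so it is not loaded at the next boot
      sc config BadDriver start= disabled
      sc stop BadDriver

      rem equivalent registry change (Start=4 means disabled)
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\BadDriver" /v Start /t REG_DWORD /d 4 /f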

    Read the article

  • Windows Server 2008 R2 Upgrade via Remote Desktop

    - by Marko
    Is it possible to do an in-place upgrade of Windows Server 2008 R2 to Windows Server 2012 using only Remote Desktop? My plan is to extract the WS 2012 installation ISO to C:\WS2012 and run the setup. After the restart the Remote Desktop connection will be lost, but will it be restored later? Is setup going to install everything automatically, restart, bring up WS 2012 and start listening for RDP connections? The server is rented and I would like to save the KVM fee. I read here that it's possible to upgrade WS 2008 to WS 2008 R2 like that. Thanks!

    Read the article

  • What issues ensue from having multiple versions of Office installed?

    - by Michael Sorens
    My ultimate question is embodied in the title, but I thought it might be helpful to others if I detail what instigated my inquiry and my examination of the problem.

    To me, the first rule of software updates is Primum non nocere -- first, do no harm. So with my Windows 7 system containing both Office 2003 and Office 2010, I blithely proceeded to install this month's updates from Microsoft, containing updates for both versions of Office. While Microsoft officially does not recommend running multiple versions (see, for example, Running Multiple Versions of Microsoft Excel), it is possible; I have had two versions installed for a year or more and have never run into an issue before. One thing that is always mentioned is installation order, i.e., the one you want to open files by default should be installed last. I wanted 2010 as my default, so I had indeed installed 2003 first and then, years later, 2010.

    So with this round of Windows updates, either it installed patches to 2010 before 2003, knocking out the file association, or the 2003 patch was more comprehensive, in the sense of touching the file association while the 2010 one did not. In any case, after the updates, double-clicking a .xls file opened 2003 rather than 2010. A web search indicated either:

    1.) Use the file associations control panel to re-associate .xls files with the correct version of Excel. I looked at this first, but it showed what seemed to be an unversioned "Excel" associated with .xls files, so I did not check further. (This turned out to be an error on my part; more later.)
    2.) Re-install the versions in the desired order; I find this unreasonable.
    3.) Run the repair option of the Office installer on the desired version; still more work than one should need.
    4.) Run Excel from the command line with "/regserver" on the one to be the default and "/unregserver" on the other. Good idea, but further searching indicated that neither 2007 nor 2010 supports "/regserver", contrary to some posts (e.g. Default Program With Multiple Versions Installed).

    Since this was a Windows Update issue and Microsoft provides free support for such, I inquired there as well, but succeeded only in getting the suggestion to uninstall all other versions, period; not acceptable to me. What worked for me was going back to the file associations control panel and manually selecting the Office 2010 version of Excel. While it appeared no different in the control panel, it did fix the double-click issue. So if all it takes is this simple fix after an update, I can live with that. What I am wondering is: has anyone seen any other problems related to having multiple versions of Office installed?
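
    For anyone diagnosing the same thing from a command prompt, the current .xls association can be inspected without the control panel. A quick sketch -- the Office14 path is the usual 2010 location, but it is an assumption about any particular install:

      rem which file type .xls is mapped to, and which command that type launches
      assoc .xls
      ftype Excel.Sheet.8

      rem if the command points at the wrong version, ftype can repoint it, e.g.:
      rem ftype Excel.Sheet.8="C:\Program Files\Microsoft Office\Office14\EXCEL.EXE" "%1"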

    Read the article

  • wusa in server 2008 R2

    - by Jack
    I need to use wusa to uninstall a particular update that is causing a reboot loop. I can access WinRE and run dism, but wusa gives "command not found". Does wusa not come with R2, or is it just not available in the built-in WinRE, or what?
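
    The poster's observation matches the usual situation: wusa.exe isn't present in WinRE, but DISM can service the offline installation directly. A sketch, assuming the installed Windows is on C: and that the exact package identity still has to be looked up from the listing (the name below is a placeholder):

      rem list installed packages in the offline image and find the offending update (KB number)
      dism /image:C:\ /get-packages

      rem remove it by the package identity reported above
      dism /image:C:\ /remove-package /packagename:Package_for_KB1234567~31bf3856ad364e35~amd64~~6.1.1.0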

    Read the article

  • Scanning php uploads in tmp directory with clamdscan fails

    - by Nikola
    I can't seem to get this to work; maybe it's a permission problem, but I can't even run clamdscan normally from the console as root -- the result is always "Permission denied". For example, I create a file test.txt (an EICAR test file) in /tmp and execute "clamdscan /tmp/test.txt" in a console logged in as root, and I get "/tmp/test.txt: Access denied. ERROR". The clamd daemon is running as the user clamav; could that be the reason? Now I want to scan the same file (/tmp/test.txt) via PHP (I have chowned the file to apache:apache), so I run:

      $cmd = "clamdscan /tmp/test.txt";
      exec($cmd, $a, $b);

    and I get exit code 127. If I try the full path of the command, /usr/bin/clamdscan, I get exit code 126 (command found but not executable). Does this mean that apache doesn't have permission to execute /usr/bin/clamdscan? What could be the problem?
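
    Two separate things are worth checking here. "Access denied" from clamdscan usually means the clamd process (running as clamav) cannot read the file it is asked to scan; passing the open file descriptor avoids that. Exit code 127 from exec() means the command wasn't found on the web server's PATH, and 126 usually points at execute permission or something like SELinux blocking Apache. A sketch of the checks, under those assumptions:

      # let clamd receive the open file descriptor instead of re-opening the path itself
      clamdscan --fdpass /tmp/test.txt

      # or make the file readable by the clamav user for a quick test
      chmod 644 /tmp/test.txt

      # check what the apache user is actually allowed to do
      ls -l /usr/bin/clamdscan
      sudo -u apache /usr/bin/clamdscan --version

      # on CentOS, see whether SELinux is the thing saying no
      getenforce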

    Read the article

  • How do I circumvent Spotify's updater for my regular users in Windows?

    - by cros
    I run a Windows Vista machine where I have Spotify installed. My account is the only account with administrator rights on the machine, which means only I can update Spotify. This means that any other user on the machine won't be able to run Spotify if there is an update that I haven't installed. Is there any way to circumvent this? EDIT: I tried setting up a task in the Task Scheduler to start spotify.exe with administrative rights any time someone logged in, which I hoped would allow an update, but it didn't, so I'm open to new suggestions.

    Read the article

  • Notebook with NVIDIA Optimus not switching video card in games

    - by user140739
    I have a Samsung RC720 notebook with Intel Integrated Graphics and an NVIDIA GeForce GT 520M. As you can see it has two video adapters, and Optimus is supposed to switch between them. But when I choose the dedicated GPU in the NVIDIA Control Panel and try to run, for example, GTA IV, it uses the integrated graphics and I get very poor performance. I have already installed the latest NVIDIA and notebook drivers, chosen the high-performance processor in the NVIDIA Control Panel, tried launching with the "Run with graphics processor..." context-menu option, and so on. Thanks for any help.

    Read the article

  • Troubleshooting a slow database server with no load

    - by user1721724
    I'm getting ready to soft launch my website and I've run into a problem that I think is being caused by my MySQL database running on Fedora. All websites run fine, just as I'd expect, but any page that establishes a database connection hangs until the connection is established, and then, bang, the site loads as it should. For example: my landing page (http://www.thrusong.com) doesn't make a database connection and loads quickly. User profile pages (http://www.thrusong.com/john) make a database connection and load slowly, even though most of the data comes from memcached and the database currently has no load on it. This problem just came up yesterday when my router died and I began using my Pace 2Wire modem with its built-in router; before, my old router was set to handle everything. My ISP says the settings in the modem are correct. Any ideas? Thanks in advance.
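
    Given that the slowness appeared exactly when the router changed, one common culprit is MySQL doing a reverse DNS lookup on every new connection and timing out against the new gateway. A sketch of the usual mitigation, assuming that is what is happening:

      # /etc/my.cnf -- skip reverse DNS lookups on incoming connections
      [mysqld]
      skip-name-resolve

    After adding it, restart mysqld, and make sure grants use IP addresses or localhost rather than hostnames, since hostname-based grants stop matching with this option enabled.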

    Read the article

  • How do I make a privileged port non-privileged in Redhat 5?

    - by Jason Thompson
    I have a RedHat 5 box on which I want to run an application I wrote that implements SLP. SLP uses port 427 for answering service queries. My understanding is that ports below 1024 are "privileged" and thus cannot be bound by anyone who isn't root. I cannot run this application as root, as it is launched via Tomcat. One creative solution I really liked was simply writing an iptables rule to redirect the privileged port to a non-privileged one, and in my proof-of-concept tests this works wonderfully. Unfortunately, the powers that be would (understandably) prefer that my application not require fiddling with iptables upon installation. I heard a rumor -- and cannot find anything to verify it -- that there is some command or parameter that can be set to make any port I want non-privileged. Is this true? If so, how is it done? Thanks!
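
    For reference, the redirect approach the poster describes normally looks something like the rules below (the unprivileged target port is a made-up example); SLP uses both TCP and UDP on 427, so both may need redirecting. On a stock RHEL 5 kernel there is no sysctl that lowers the privileged-port boundary, so the usual alternatives are a redirect like this, a helper such as authbind, or granting the binary the CAP_NET_BIND_SERVICE capability where the kernel and filesystem support file capabilities.

      # redirect incoming SLP traffic from privileged port 427 to an unprivileged one
      iptables -t nat -A PREROUTING -p tcp --dport 427 -j REDIRECT --to-ports 4270
      iptables -t nat -A PREROUTING -p udp --dport 427 -j REDIRECT --to-ports 4270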

    Read the article

  • Address already in use - Amazon AWS

    - by Peter
    I've run into a really weird issue. I was debugging a server 500 error on our EC2 instance and found that we didn't have the ionCube loaders installed. So I went to install them: I created a new file at /etc/php.d/zend.ini, initially containing

      extension=/usr/local/ioncube/ioncube_loader_lin_5.3.so

    and restarted httpd, at which point it told me:

      The ionCube Loader is a Zend-Engine extension and not a module
      Please specify the Loader using 'zend_extension' in php.ini
      PHP Fatal error: Unable to start ionCube Loader module in Unknown on line 0

    So I changed the contents of zend.ini to zend_extension=/usr/...etc. Now when I attempt to restart httpd I get this error:

      Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80
      (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
      no listening sockets available, shutting down
      Unable to open logs

    I can't even run /etc/init.d/httpd stop without it erroring. I've since removed zend.ini to see if that's what caused it, and it doesn't seem to be. Any ideas?
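
    "Address already in use" on port 80 after a failed restart usually means the old httpd processes never exited cleanly, so the new ones can't bind. A sketch of the usual way out:

      # see what is actually holding port 80
      netstat -tlnp | grep ':80 '

      # if it's orphaned httpd workers, kill them, then start cleanly
      killall httpd
      sleep 2
      /etc/init.d/httpd start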

    Read the article

  • "Installing" GD for PHP

    - by gbuckingham89
    I'm new to server admin & Linux and have just got a VPS running CentOS 6. Apache, MySQL and PHP all came installed (along with cPanel and WHM); however, I'm now also trying to install the GD library. I've run "yum install php-gd" and it installed ok. If I run it again I get "Package php-gd-5.3.2-6.el6_0.1.x86_64 already installed and latest version". However, when I do a phpinfo() or run "php -m" from the command line, there is no mention of GD. Is there anything else I need to do?
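
    Two usual follow-ups on a box like this: Apache has to be restarted before a newly installed PHP module shows up, and on a cPanel/WHM server the PHP that Apache uses is typically built by EasyApache rather than taken from yum, in which case the yum package never gets loaded at all. A sketch of the quick checks, under those assumptions:

      # restart Apache so mod_php re-reads /etc/php.d/*.ini
      service httpd restart

      # confirm whether the module is registered
      php -m | grep -i gd
      php -i | grep -i 'gd support'

      # see which php.ini / scan directory the CLI and Apache PHP are actually using
      php --ini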

    Read the article

  • Varnish server in front of nginx server with multiple virtualhosts

    - by Garreth 00
    I have tried to search for a solution to this, but can't find any documentation/tips on my specific setup.

    My setup:

      Backend server: nginx, 2 different websites (2 top-level domains) in virtualenvs, running gunicorn/Python/Django. Backend server hardware (VPS): 2gb ram, 8 CPU.
      Database server: postgresql with pg_bouncer. Database server hardware (VPS): 1gb ram, 8 CPU.
      Varnish server: only running Varnish. Varnish server hardware (VPS): 1gb ram, 8 CPU.

    I'm trying to set up a Varnish server to handle a rare spike in traffic (20 000 unique req/s). The spike happens when a TV program mentions one of the sites. What do I need to do to make the Varnish server cache both sites/domains on my backend server?

    My /etc/varnish/default.vcl:

      backend django_backend {
          .host = "local.backendserver.com";
          .port = "8080";
      }

    My /usr/local/nginx/site-avaible/domain1.com:

      upstream gunicorn_domain1 {
          server unix:/home/<USER>/.virtualenvs/<DOMAIN1>/<APP1>/run/gunicorn.sock fail_timeout=0;
      }
      server {
          listen 80;
          listen 8080;
          server_name domain1.com;
          rewrite ^ http://www.domains.com$request_uri? permanent;
      }
      server {
          listen 80 default_server;
          listen 8080;
          client_max_body_size 4G;
          server_name www.domain1.com;
          keepalive_timeout 5;
          # path for static files
          root /home/<USER>/<APP>-media/;
          location / {
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              if (!-f $request_filename) {
                  proxy_pass http://gunicorn_domain1;
                  break;
              }
          }
      }

    My /usr/local/nginx/site-avaible/domain2.com:

      upstream gunicorn_domain2 {
          server unix:/home/<USER>/.virtualenvs/<DOMAIN2>/<APP2>/run/gunicorn.sock fail_timeout=0;
      }
      server {
          listen 80;
          listen 8080;
          server_name domain2.com;
          rewrite ^ http://www.domains.com$request_uri? permanent;
      }
      server {
          listen 80;
          listen 8080;
          client_max_body_size 4G;
          server_name www.domain2.com;
          keepalive_timeout 5;
          # path for static files
          root /home/<USER>/<APP>-media/;
          location / {
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              if (!-f $request_filename) {
                  proxy_pass http://gunicorn_domain2;
                  break;
              }
          }
      }

    Right now, if I try the IP of the Varnish server I only get served domain1.com. Would everything be correct if I change the DNS of the two domains to point to the Varnish server, or is there extra setup needed before it would work? Question 2: do I need a dedicated server for Varnish, or could I just install Varnish on my backend server, or would that server run out of memory quickly?
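
    Since both domains are served by the same nginx backend, Varnish does not need per-host routing at all: it forwards the client's Host header by default, so once the DNS for both domains points at the Varnish box, nginx's server_name matching keeps working. Getting domain1.com when hitting the bare IP is just nginx's default_server kicking in because the Host header is the IP. If the two domains ever do need different backends, host-based selection in VCL looks roughly like this (Varnish 3 syntax; the second backend name is hypothetical):

      sub vcl_recv {
          if (req.http.host ~ "(?i)(www\.)?domain2\.com$") {
              set req.backend = domain2_backend;
          } else {
              set req.backend = django_backend;
          }
      }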

    Read the article

  • How can a driver change the kernel page table?

    - by Naruto
    I am encountering an issue with kernel memory. When my driver finishes running, other processes fail to run in the kernel; for example, if I run ls, the command crashes the kernel with a "Corrupted page table" error at a specific address. I do not know how the page table of my driver relates to the page tables of other processes. How can my driver change the page tables of other processes? And how does a process's driver relate to the kernel page table? As I understand it, when the driver runs, execution switches to kernel context; the kernel has its own page table and the driver has its own as well. What is the relation among the kernel page table, the page table of my driver, and the page tables of the other processes when the driver runs in kernel context?

    Read the article

  • Is there a way to control two instantiated systemd services as a single unit?

    - by rascalking
    I've got a couple of Python web services I'm trying to run on a Fedora 15 box. They're run by paster, and the only difference in starting them is the config file they read. This seems like a good fit for systemd's instantiated services, but I'd like to be able to control them as a single unit. A systemd target that requires both services seems like the way to approach that. Starting the target does start both services, but stopping the target leaves them running. Here's the service file:

      [Unit]
      Description=AUI Instance on Port %i
      After=syslog.target

      [Service]
      WorkingDirectory=/usr/local/share/aui
      ExecStart=/opt/cogo/bin/paster serve --log-file=/var/log/aui/%i deploy-%i.ini
      Restart=always
      RestartSec=2
      User=aui
      Group=aui

      [Install]
      WantedBy=multi-user.target

    And here's the target file:

      [Unit]
      Description=AUI
      [email protected] [email protected]
      After=syslog.target

      [Install]
      WantedBy=multi-user.target

    Is this kind of grouping even possible with systemd?
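
    One way to get stop/restart to propagate from a target to its member services is PartOf= in the instance unit, available in newer systemd versions; whether the systemd shipped with Fedora 15 already supports it is not something the post confirms, so treat this as a sketch. The file name aui@.service, the target name aui.target and the concrete instance names are assumptions (the real instance names were redacted in the post):

      # aui@.service -- add PartOf= to the [Unit] section of the instance template
      [Unit]
      Description=AUI Instance on Port %i
      After=syslog.target
      PartOf=aui.target

      # aui.target -- the target pulls in the concrete instances (placeholder names)
      [Unit]
      Description=AUI
      Requires=aui@8080.service aui@8081.service
      After=syslog.target

      [Install]
      WantedBy=multi-user.target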

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine:

      3 hard drives using RAID 5
      Stripe element size: 1M
      Read policy: Adaptive read ahead
      Write policy: Write Through
      /boot  200 MB  ext2
      /      15 GB   ext3
      SWAP   10 GB
      LVM    rest (~500GB)

    I installed postgresql and created a big database (over 1GB). I have an SQL request that takes a long time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then I installed Xen, created a domU with another Debian install, and on that OS I also installed postgresql with the same database. The same SQL request takes only 2.5 seconds to run.

    I checked the kernel on both dom0 and domU: uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same; for those that are relevant, I changed their values to make them match on both systems using sysctl, and saw no changes (the requests still take the same amount of time). After this, I checked the file systems: I used ext3 on domU. Still no change. I installed hdparm and ran hdparm -Tt on all partitions on both systems, and I get similar results. Now I am stuck; I don't know what is different and what could be the cause of such a big difference.

    Additional info:

      Debian runs on a Dell PowerEdge 2950 server
      postgresql: 9.1.9 (both dom0 and domU)
      xen-linux-system: 3.2.0
      xen-hypervisor: 4.1

    Thanks.

    EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both read and write speed. Here is domU:

      root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
      ^C2020+0 records in
      2020+0 records out
      2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s
      root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
      2020+0 records in
      2020+0 records out
      2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s

    And here is dom0:

      root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
      ^C1693+0 records in
      1693+0 records out
      1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s
      root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
      1693+0 records in
      1693+0 records out
      1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s

    What can be the cause of this caching behaviour? How can we "fix" it? Can we apply it to dom0?

    EDIT 2: I switched my virtual disk type. To do so I followed this article. I did

      dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M

    and then in /etc/xen/test1.cfg I changed the disk parameter to use file: instead of phy:. That should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
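
    The 3.4 GB/s read on dom0 is the giveaway that the second dd read came straight from the page cache, so the raw numbers are not comparing disks at all. A way to re-run the comparison with caching taken out of the picture (standard Linux mechanisms, nothing specific to this setup):

      # drop page cache, dentries and inodes before timing a read
      sync
      echo 3 > /proc/sys/vm/drop_caches

      # or bypass the cache entirely for the test itself
      dd if=/root/dd of=/dev/null bs=1M count=2000 iflag=direct
      dd if=/dev/zero of=/root/dd bs=1M count=2000 oflag=direct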

    Read the article

  • How can I prevent users from installing software?

    - by Cypher
    Our organization is a bit different than most. During certain times of the year, we grow to thousands of employees, and during off-times, less than a hundred. Over the course of a few years, many thousands of people have come and gone in our offices, and left their legacy behind in the form of all sorts of unwanted, unapproved, (and sometimes unlicensed) software installs on our desktops. We are currently installing redundant domain controllers and upgrading current servers, all running Windows Server 2008 Enterprise, and will eventually be able to run a pure 2008 DC network. With that in mind, what are our options in being able to lock down users, such that they cannot install unauthorized software on systems without the assistance (or authorization) of the IT group? We need to support approximately 400 desktops, so automation is key. I've taken note of the Software Restrictions we can implement via Group Policy, but that implies that we already know what users will be installing and attempting to run... not quite so elegant. Any ideas?
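
    With a pure 2008 R2 domain (and Windows 7 Enterprise/Ultimate clients), the usual combination is: no local admin rights for users, plus AppLocker (or Software Restriction Policies in default-deny mode) so only whitelisted software runs. A rough sketch of building a baseline AppLocker policy from a clean reference machine with the AppLocker PowerShell cmdlets; the output path is arbitrary, and the policy should be tested in audit-only mode before enforcing:

      Import-Module AppLocker

      # generate publisher/hash allow rules from everything currently in Program Files
      Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse -FileType Exe |
          New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Xml |
          Out-File C:\policies\applocker-baseline.xml

      # import the XML into a GPO with the Group Policy editor and roll it out in audit mode first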

    Read the article

  • Excel Help: Macro is not Cooperating with Quotations!!

    - by B-Ballerl
    Hi all, I have a macro containing a line that changes the formula of a cell using the R1C1 formula type. The line is:

      ActiveCell.FormulaR1C1 = _
          "=IF(R[0]C[-2]=0,"",(R[0]C[-20]-R[0]C[-16]))"

    Whenever I attempt to run the macro it comes up with a dialog box saying Run-time error '1004': Application-defined or object-defined error, and when I click Debug it highlights those two lines in the macro. I can't figure out how to fix it. Can anyone help?
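
    In VBA a double quote inside a string literal has to be doubled, so the "" that is meant to produce an empty string in the formula actually terminates the string early and the assignment ends up with an invalid formula, which is a common way to get error 1004 here. A sketch of the corrected line, assuming an empty string is what the IF should return:

      ' "" inside a VBA string must be written as """" to reach Excel as ""
      ActiveCell.FormulaR1C1 = _
          "=IF(R[0]C[-2]=0,"""",(R[0]C[-20]-R[0]C[-16]))"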

    Read the article
