Search Results

Search found 23811 results on 953 pages for 'javax script'.


  • Can't change read only folder in windows 7

    - by James Drinkard
    I'm working through a Spring MVC 2.5 tutorial, and when I run the Ant deploy target I get this error:

        deploy:
        [copy] Copying 2 files to C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp
        BUILD FAILED
        C:\projects\workspace\springapp\build.xml:46: Failed to copy C:\projects\workspace\springapp\war\WEB-INF\web.xml to C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp\WEB-INF\web.xml due to failed to create the parent directory for C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp\WEB-INF\web.xml

    Looking at the springapp directory, its properties show it as read-only. No problem, I thought, since I'm logged in as administrator. However, changing the UAC settings, opening an admin command prompt and trying to change the folder's attributes with attrib, taking ownership of the folder, changing the security settings, and so on did nothing; I can't seem to change anything about this folder. So my question is: how do I change the settings on that folder so Ant can write to it?

  • Run ClickOnce Application from CLI

    - by Badger
    I'm putting together an auto-install script for where I work, and we have a ClickOnce-type application from a vendor. I've looked into it and we can't automate the install itself, but we would at least like to start the installer automatically. I have tried

        rundll32.exe dfshim.dll,ShOpenVerbApplication "%SOFTWARE%\ToolsApp.application"

    but it gives me an error about an invalid URI. What would probably be easiest is to use whatever program Windows (Windows XP in our case) treats as the default handler for the file type and simply launch the file with it. I don't know if any such thing exists, but that is what comes to mind.

  • Linux: Tool to monitor every process and executed command (in short, what's happening at the moment)

    - by Bevor
    Due to a freeze problem on my Ubuntu 10.10 machine that I haven't been able to isolate, I thought about logging everything the kernel executes to a file, so that the next time a freeze occurs I can see what happened last and not lose valuable information. I found acct, but that is obviously not what I'm looking for: it only logs user commands and the like. I need something that logs at a much deeper level; ideally some kind of script that records every interrupt. Does anybody know of a tool like that?

  • How to cleanup tmp folder safely on Linux

    - by Syncopated
    I use a 2 GB RAM-backed tmpfs for /tmp. Normally this is enough, but sometimes processes create files there and fail to clean up after themselves, for instance when they crash. I need to delete these orphaned tmp files or future processes will run out of space on /tmp. How can I safely garbage-collect /tmp? Some people do it by checking the last-modification timestamp, but that approach is unsafe because long-running processes may still need those files. A safer approach is to combine the last-modification-timestamp condition with the condition that no process holds an open file handle on the file. Is there a program/script that embodies this approach, or some other approach that is also safe? Incidentally, do Linux/Unix allow opening a file (with creation) in a mode where the file is deleted when the creating process terminates, even if it crashes?
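
    A minimal sketch of the approach described above, combining an age threshold with an open-file-handle check via fuser (from psmisc); the 7-day cutoff and the restriction to regular files are assumptions to adjust, and a small race window between the check and the delete remains:

      find /tmp -xdev -type f -mtime +7 -print0 |
        while IFS= read -r -d '' f; do
          fuser -s "$f" 2>/dev/null || rm -f -- "$f"    # delete only if no process has it open
        done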

  • ssh freezes when trying to connect to some hosts

    - by NS Gopikrishnan
    When I try to ssh to particular machines in a list, the ssh command freezes. I tried setting an ssh timeout, but it still freezes even after the timeout. In verbose mode:

        OpenSSH_3.9p1, OpenSSL 0.9.7a Feb 19 2003
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to x358.x.server.com [10.x.x.x] port 22.
        debug1: fd 3 clearing O_NONBLOCK
        debug1: Connection established.
        debug1: identity file /export/home/sqlrpt/.ssh/identity type -1
        debug1: identity file /export/home/sqlrpt/.ssh/id_rsa type -1
        debug1: identity file /export/home/sqlrpt/.ssh/id_dsa type 2

    At this point it freezes. A workaround I thought of was to spawn a child process for each ssh call and kill it if it doesn't respond within a timeout. But is there a less complex way, so I can keep it in a shell script rather than writing a C/C++ program?
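
    A hedged, shell-only sketch of that kill-after-a-timeout idea: ConnectTimeout (which this OpenSSH 3.9 should already honour) bounds the connection phase, and GNU coreutils' timeout, where available, bounds the whole attempt; hosts.txt and the uptime command are placeholders:

      while read -r host; do
        timeout 30 ssh -o ConnectTimeout=10 -o BatchMode=yes "$host" uptime \
          || echo "$host: unreachable or timed out" >&2
      done < hosts.txt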

  • HUGE MAC FILTER and scripting

    - by user195917
    I set up a DHCP server on CentOS and apply a MAC filter for my clients. With a small number of clients (10 at most) this is not hard, but what do I do with 2000 clients? My idea was to keep a list (e.g. "macfilter.lst") and have this list updated from a database. I have two questions. First: how do I create an iptables filter that takes its entries from a file hosted on the server? Second: any idea how to write a script that updates that file from a database? Thanks so much for your help.
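
    A rough sketch of the first part, assuming macfilter.lst holds one MAC address per line and lives in /etc; the chain still has to be hooked into INPUT or FORWARD once (e.g. iptables -I INPUT -i eth0 -j MACFILTER), and for the second question the same loop could read from a database dump (e.g. mysql -N -e "SELECT mac FROM clients") instead of the file:

      iptables -N MACFILTER 2>/dev/null      # create the chain on first run
      iptables -F MACFILTER                  # then rebuild it from the list
      while read -r mac; do
        [ -n "$mac" ] && iptables -A MACFILTER -m mac --mac-source "$mac" -j ACCEPT
      done < /etc/macfilter.lst
      iptables -A MACFILTER -j DROP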

  • Modify PATH variable for X11 during log-in

    - by user1028435
    I am working on some lab computers (read: no administrative rights) where I need to change the PATH variable as X11 starts, at the moment I log in. The reason it has to happen then, rather than later, is that the Print Screen command seems to "bind" during login (forgive my poor explanation of this). Currently I have a .bashrc workaround:

        #!/bin/bash
        export PATH=/home/username/bin:$PATH

    I can make it work by starting a new X session, but I was wondering whether the change can be made upon login itself. cat /etc/redhat-release tells me the distribution is Red Hat Enterprise Linux Client release 5.8 (Tikanga).
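
    One possibility, stated as an assumption rather than something verified on that exact RHEL 5.8 image: Red Hat's Xsession scripts generally source ~/.xprofile before the session (and its key bindings) starts, so exporting PATH there makes it visible to everything X launches:

      echo 'export PATH="$HOME/bin:$PATH"' >> ~/.xprofile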

  • loss of sound in ubuntu 12.04

    - by Leo Simon
    I'm running

        Linux E6520 3.2.0-56-generic #86-Ubuntu SMP Wed Oct 23 09:20:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    on a Dell Latitude E6530. (This is a new machine; I ran the same version of Linux on an older machine for a year without this happening.) I've been losing sound regularly, though I have not been able to isolate the trigger. I've scoured the web on this subject, in particular https://help.ubuntu.com/community/SoundTroubleshootingProcedure and "Audio stopped working suddenly in 12.04". Nothing from the first site seemed to work for me; from the second I learned enough to fix the problem when it happens, but nothing on the web has helped me figure out why it is happening in the first place. Patching together material from the web, and with some blind luck, I've found that the following steps restore sound:

        pulseaudio --kill
        pulseaudio --start
        pavucontrol -> output devices
        click the "Mute audio" icon, which mutes audio
        click the "Mute audio" icon again, which unmutes audio

    This obviously doesn't make sense: audio wasn't muted in the first place, but somehow, magically, toggling mute off and on seems to reset something. Can anybody suggest, from this information, why sound is disappearing in the first place (it seems as though something is being muted at the system level, but I don't know what), and a simpler (command-line/script) way of restoring it? In particular, is it possible to reset pavucontrol from the command line? Some other pieces of information that may be of use: the problem is clearly happening at the system level, since I've set up a clean new user and that user has the same problem, so per-user fixes like deleting the .pulse directory don't help; and sound works fine in Windows (dual boot), so it's not a hardware problem. Any help/suggestions would be most appreciated.
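
    A rough command-line equivalent of the restart-plus-mute-toggle sequence above; it assumes pactl on this PulseAudio version accepts the @DEFAULT_SINK@ alias (if not, substitute the sink name shown by pactl list short sinks):

      pulseaudio --kill
      sleep 2
      pulseaudio --start
      pactl set-sink-mute @DEFAULT_SINK@ 1    # mute...
      pactl set-sink-mute @DEFAULT_SINK@ 0    # ...and unmute, mirroring the pavucontrol toggle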

  • Automatically copy files out of directory

    - by wizard
    A user's laptop was stolen recently during shipping, and it was set up with Windows Live Sync. The thief (or the buyer's kids) took some photos of themselves, and they were synced into the user's My Documents. I had just finished moving the user's files out of the synced My Documents folder when I noticed this. Later they took some more photos and a video. I wrote a batch script to copy files out of the synced directory every 5 minutes into a dated directory; in the end I ended up with a lot of copies of the same few files. Ignoring what Windows Live Sync offers (at the time there was no way to undelete files, and I've since moved to Dropbox, so this isn't really an issue for me any more): what's the best way to preserve changes and files from a directory? I'm interested in Windows solutions, but if you know of a good way on *nix, please go ahead and share.
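
    On the *nix side, one hedged sketch is rsync with --link-dest: each run produces a dated snapshot, but files that did not change are hard-linked to the previous snapshot instead of being copied again, which avoids piling up identical copies; the paths are placeholders:

      SRC="/path/to/synced-docs/"
      DEST="/backup/snapshots"
      NOW=$(date +%Y-%m-%d_%H%M)
      rsync -a --link-dest="$DEST/latest" "$SRC" "$DEST/$NOW"
      ln -sfn "$DEST/$NOW" "$DEST/latest"    # keep a pointer to the most recent snapshot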

  • How do I install the Firestorm viewer for Second Life?

    - by Cordenne
    I am new to Ubuntu and trying to set everything up. I am VERY bad at doing that at the moment; in fact, I asked another question here only a few hours ago. Anyway, I am trying to get the Firestorm viewer for Second Life. I followed the instructions given here: http://michaelferrie.blogspot.com/2012_04_01_archive.html and ended up with these results:

        cordenne@ubuntu:~$ sudo apt-get install ia32-libs
        [sudo] password for cordenne:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        ia32-libs is already the newest version.
        The following packages were automatically installed and are no longer required: libnspr4-0d:i386 libgconf2-4:i386 libnss3-1d:i386
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
        cordenne@ubuntu:~$ '/home/cordenne/install.sh'
        You are not running as a privileged user, so you will only be able to install the Firestorm Viewer in your home directory. If you would like to install the Firestorm Viewer system-wide, please run this script as the root user, or with the 'sudo' command.
        Proceed with the installation? [Y/N]: Y
        - Installing to /home/cordenne/firestorm
        cp: cannot copy a directory, `/home/cordenne/firestorm', into itself, `/home/cordenne/firestorm/firestorm'
        Failed

    So, still no Firestorm. Can anyone help? PS: when it said "Installing to /home/cordenne/firestorm" I felt it was taking too long to do anything, so I pressed Enter. I don't know if that made a difference, but if it does, now you know!

  • How to set Outlook 2010 to use signatures outside of the default signature folder?

    - by Gregory MOUSSAT
    With Outlook before the 2010 version, it was possible to specify any path for the signatures. With Outlook 2010, the only way is to use the ones stored under C:\Documents and Settings\UserName\Local Settings\Application Data\Microsoft\Signatures\. I'd like to point the signatures to a network share, allowing us to modify them on the share instead of logging in on every computer each time we are asked to change them (and this is quite often, because the signatures contain logos for current events). We currently use a script to copy the signatures from the share to the local disk when users log in.

  • 'Singleton' application - or let the user only launch one instance of a program at a time

    - by Disco
    I'm running a few Linux desktops, mainly for kids (yeah, trying to teach them the right OS at an early stage), on Ubuntu 10.10 with GNOME. The problem is that they find it very funny to bring their workstations (actually old Pentium 4 boxes with 512 MB of RAM) to their knees by launching thousands of Firefox instances. I'm looking for a way to restrict them to launching N instances of a particular application, and I haven't figured out how yet. I thought of a monitoring daemon, but I suspect that would be too resource-hungry. Any idea of a script/trick to achieve this? Note: I may have 1-2 levels of users (the kids, and the more grown-up kids), so I also have to limit per user; something like user1: 3 Firefox instances, user2: 2 Firefox instances.
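
    A sketch of one approach: a per-user wrapper script placed ahead of the real launcher in PATH (for example as /usr/local/bin/firefox); the process name being counted, the zenity warning and the limit are assumptions to adapt per user or group:

      #!/bin/sh
      LIMIT=3
      RUNNING=$(pgrep -u "$USER" -c -x firefox-bin)    # adjust the name to whatever pgrep -l shows
      if [ "${RUNNING:-0}" -ge "$LIMIT" ]; then
          zenity --warning --text="Limit of $LIMIT Firefox instances reached." 2>/dev/null
          exit 1
      fi
      exec /usr/bin/firefox "$@"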

  • What is the best way to make Calculate SHA1 as a context menu option in Mac OS X?

    - by Andrei
    In order to calculate the SHA1 checksum of a downloaded file, I can type /usr/bin/openssl sha1 in Terminal and then drag in the file I want to check. To make this simpler, one could add a context-menu item for the action. What is the best way to create such an item in Mac OS X 10.6? A detailed answer is appreciated, because I don't have much experience with AppleScript and the like. Step by step:

        1. Open Automator
        2. Create a new Service
        3. Choose to receive selected Files and Folders in Finder
        4. Add a "Run Shell Script" action, where the bash command is /usr/bin/openssl sha1 "$@" and input is passed as arguments

    How can I get the output? Preferably in a Growl pop-up or a message window/dialog.
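
    For that last step, one hedged option inside the same "Run Shell Script" action (with input passed as arguments) is to hand the result to AppleScript's display dialog via osascript, which avoids needing Growl at all; note the quoting will break on file names containing double quotes:

      for f in "$@"; do
        sum=$(/usr/bin/openssl sha1 "$f")
        /usr/bin/osascript -e "display dialog \"$sum\" buttons {\"OK\"} default button 1"
      done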

  • Is it possible to know instantly when a user logs in on Ubuntu?

    - by Mustafa Orkun Acar
    In fact, I am trying to restrict access to some websites for different users. I asked the question "Restrict access to some websites for different users". The given answer is fine, but as its author notes, it only works while the users are locally logged in; if a user logs out and logs back in, the restrictions are no longer valid. So I decided to run a script containing the iptables commands for the restrictions at every login event. What I want to know is whether it is possible to detect, instantly, that a user has logged in.
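
    One hedged way to hook every login (console, ssh or graphical) is PAM's pam_exec module, which runs a script at the start of each session and exposes the user as $PAM_USER; the script path below is a placeholder, and on Ubuntu the line would be appended to /etc/pam.d/common-session:

      echo 'session optional pam_exec.so /usr/local/sbin/apply-web-restrictions.sh' \
        | sudo tee -a /etc/pam.d/common-session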

  • How to restrict ssh port forwarding, without denying it?

    - by Kaz
    Suppose I have created an account whose login shell is actually a script that does not permit an interactive login and only allows a very limited, specific set of commands to be executed remotely. Nevertheless, ssh allows the user of this account to forward ports, which is a hole. Now, the twist is that I actually want that account to set up a specific port-forwarding configuration when the ssh session is established, but it must be impossible to configure arbitrary port forwarding. (It is an acceptable solution if the permitted port-forwarding configuration is unconditionally established as part of every session.)
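
    A sketch of one way to get most of that with per-key options in the account's ~/.ssh/authorized_keys: the forced command stays in charge, local -L forwards are limited to a single permitted destination, and the rest is switched off. The host:port, script path and key are placeholders, and remote -R forwards would additionally need the newer permitlisten option or a Match block in sshd_config:

      command="/usr/local/bin/limited-shell",no-pty,no-agent-forwarding,no-X11-forwarding,permitopen="127.0.0.1:5432" ssh-rsa AAAA...key... restricted@client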

  • Trac/SVN to DVCS Migration

    - by quanticle
    The project I'm currently working on is using Trac, with SVN integration. It's worked great until now. Now, however, we've taken on some additional developers and we're running into issues with branching and merging. Because of this, I think a move to a distributed version control system is in order. The problem is that Trac is very closely integrated with the SVN repository. We have tight integration between the tickets and the revision numbers of code changes corresponding to those tickets. In addition we have a support wiki that has a lot of data that helps the tech. support team. Is there a way we can migrate to git or mercurial without losing the benefits of Trac? I've looked at the git plugin for Trac, and I'm unsure of how well it works. Has anyone here used it with a project that's been migrated from SVN? EDIT: I should note that the most important priority for us is maintaining the links between Trac tickets and the corresponding changesets in SVN. That's a tool that we use every day, and it provides an easy way to jump to code changes when reviewing tickets. Wiki migration would be nice to have, but if it's not possible, we can continue to run the old system whilst we write some kind of a one-off script to migrate the content.

  • Building a List of All SharePoint Timer Jobs Programmatically in C#

    - by Damon
    One of the most frustrating things about SharePoint is that the difficulty of figuring something out is inversely proportional to the simplicity of what you are trying to accomplish. Case in point: yesterday I wanted to get a list of all the timer jobs in SharePoint. Having never done this, and having no idea how to do it off the top of my head, I asked Google. I like to think my Google-fu is fair to good, so I normally find exactly what I'm looking for in the first hit, but on the topic of listing all SharePoint timer jobs all it came up with was a PowerShell command (Get-SPTimerJob) and nothing more. Refined search after refined search continued to turn up nothing, so apparently I am the only person on the planet who needs to get a list of the timer jobs in C#. In case you are the second person on the planet who needs to do this, the code follows:

        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            var timerJobs = new List<SPJobDefinition>();
            foreach (var job in SPAdministrationWebApplication.Local.JobDefinitions)
            {
                timerJobs.Add(job);
            }
            foreach (SPService curService in SPFarm.Local.Services)
            {
                foreach (var job in curService.JobDefinitions)
                {
                    timerJobs.Add(job);
                }
            }
        });

    For reference, you need both top-level loops because the Central Admin web application doesn't end up in the SPFarm.Local.Services group, so you have to get its job definitions manually from the SPAdministrationWebApplication.Local reference.

  • running autobench (httperf)

    - by Matthew
    So I ran apt-get install httperf on my system and I can now run httperf. But how can I run autobench? I downloaded the tarball and unarchived it, and if I go into the directory and run autobench it says "-bash: command not found". I think it's a Perl script, but if I run perl autobench, it says:

        root@example:/tmp/autobench-2.1.2# perl autobench
        Autobench configuration file not found - installing new copy in /root/.autobench.conf
        cp: cannot stat `/etc/autobench.conf': No such file or directory
        Installation complete - please rerun autobench

    Even if I run it again, it says the same thing.

  • Payables Master Generic Datafix (MGD) Now Checks For Even More EBTax Corruption!!

    - by MargaretW
    The Payables MGD is a vital diagnostic that all R12/12.1 customers need to run regularly to check the data integrity of their Payables system. This script does not make any changes to your system, so it's risk free, and it produces HTML-formatted output showing which data corruption issues have been detected, along with the Doc IDs that will be needed to fix them. This MGD diagnostic (version 120.92 and above) is even better than it used to be, as it now checks for 11 new EBTax corruption signatures that Support was seeing on a consistent basis. These lengthy Service Requests could have been avoided with one run of the MGD, which tells you right away if you have data corruption. It's the first thing our Payables support engineers will have you run when you log an SR, so why not be one step ahead? The new EBTax signatures included in this latest update to the MGD are pulled from the following common solutions documents:

        R12 E-Business Tax/Payables Data-Fixes: Cause and action to handle ZX_LINES_SUMMARY_U1 issue (Doc ID 1152123.1)
        EB-Tax Data Corruption Issues & Recommended Solutions (Doc ID 1316316.1)

    The specific issues that are now screened are detailed below:

        1. TAXABLE_BASIS_FORMULA and MANUALLY_ENTERED_FLAG mismatch
        2. ESTABLISHMENT_ID mismatch
        3. TRX_NUMBER mismatch
        4. TAX_RATE mismatch
        5. Currency-conversion-related columns mismatch in migrated invoices
        6. HISTORICAL_FLAG and RECORD_TYPE_CODE mismatch
        7. ADJUSTED_DOC_TRX_LEVEL_TYPE is NULL or APPLIED_FROM_TRX_LEVEL_TYPE is NULL
        8. Missing reversal tax distributions for tax distributions
        9. Tax lines for discarded or cancelled transaction lines are not marked as cancelled
        10. Error AP_ERR_TAX_DIST_SYNC
        11. AP_UNFROZEN_DIST_EXIST / unfrozen tax distributions exist for this invoice

    Get proactive: check your system for these common EBTax issues and fix the data before it causes a problem. Access the MGD note and watch the video that explains how it works here: R12: Master GDF Diagnostic to Validate Data Related to Invoices, Payments, Accounting, Suppliers and EBTax [VIDEO] (Doc ID 1360390.1).

  • How to show users the reason for a message being bounced or rejected by Postfix?

    - by Ross Bearman
    A user would like to be able to view a web page showing any emails that a Postfix server has either been unable to send or unable to receive. For example, if the user was supposed to receive an email from a third party but it hasn't arrived, they'd be able to check the web page and see a list of emails rejected by Postfix, along with a clear reason why. I've been unable to find an existing application that offers this functionality. Does anyone know of one, or is the best way forward to write a script that parses the log and displays the results?
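
    Lacking a ready-made application, a hedged starting point for the parse-the-log route: Postfix records failed outbound deliveries as status=bounced lines (with the remote side's stated reason in parentheses) and inbound refusals as NOQUEUE: reject lines; the log path and the output location below are assumptions:

      grep -hE 'status=bounced|NOQUEUE: reject' /var/log/mail.log \
        > /var/www/mail-failures.txt    # render or serve this file from the web page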

  • Migrating to ssh key authentication; implications of adding sbin's to users $PATH

    - by ancillary
    I'm in the process of migrating to keys for authentication on my CentOS boxes. I have it all set up and working, but I was a bit taken aback when I noticed that service (and other things) didn't work the way I was accustomed to. Even after su'ing to root, I still had to call the full path for it to work (which I assume is expected/normal behavior). I also assume this is because root (what I was using and am used to) and the newly created, key-using user have different $PATHs; specifically, I noticed the sbin directories missing from the user's path. If I were to add those paths (/sbin, /usr/sbin, /usr/local/sbin) to a profile.d .sh script for this new key-loving user, would I be opening up the system in ways I shouldn't? Would I be doing something I needn't do, save for reasons of laziness? Would I create other potential problems? Thanks.
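
    A sketch of the profile.d route for reference; it only changes what the shell finds, not what the account is allowed to run, and the usual alternative is simply su - or sudo, which give you root's own PATH. Contents of a hypothetical /etc/profile.d/sbin-path.sh:

      # sourced by login shells on CentOS; append the sbin directories if they are missing
      echo ":$PATH:" | grep -q ':/usr/sbin:' || PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"
      export PATH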

  • which command run in cron returns nothing

    - by Zárate
    Hi there, I've written a little utility in haXe + Neko that needs to execute some Git commands. To avoid hard-coding the path to the git executable, I'd like to use the which command to find out where it is. Everything works as expected when run manually from the console, but not when the app runs from a cron job. I'm aware of the restricted environment (here or here) you get when cron runs a script, but I'm still surprised this doesn't work:

        /usr/bin/which git >> /home/user/git.txt

    The text file is created but its content is empty. Again, when run from the console it works as expected. Any ideas? I'm running OS X Leopard, if that helps. Thanks : ) Juan
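
    The likely culprit, stated as an assumption, is exactly that restricted environment: cron's PATH is typically just /usr/bin:/bin, so a git installed under /usr/local/bin (or /usr/local/git/bin on a Mac) is invisible to which. A crontab excerpt that sets PATH explicitly, with the schedule and paths as placeholders:

      PATH=/usr/local/bin:/usr/local/git/bin:/usr/bin:/bin
      */10 * * * * /usr/bin/which git >> /home/user/git.txt 2>&1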

  • Sending Emails from existing SMTP host from Ubuntu Server

    - by ezgoodnight
    I feel like my problem is very simple, but I've been trying for quite some time and haven't cracked it. You experienced server guys will probably laugh at this, but I'm finally at the point where I need help or I'll never get anywhere. I have a little box running 12.04 LTS, and I want to script some status checks, have the server email me the results, and schedule the whole thing with cron. Basically I want a command-line mail client that I can set up as easily as Thunderbird to send through my existing SMTP server, something that can easily be rolled into my bash scripts. I already have a remote host handling our email, SMTP, MTA, all that garbage. I don't particularly want to set up a relay just to send email when I have one that everyone else in the company already uses. I've tried, but there are too many aspects I don't understand, and I don't see why I should set up something local when we already pay a remote host to do these things. If I absolutely have to set up sendmail or postfix, then so be it, but I'd appreciate a simpler alternative.
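
    A minimal sketch using the heirloom-mailx package (the stock bsd-mailx hands mail to a local MTA and cannot talk to a remote SMTP host itself); the server, credentials and addresses are placeholders, and -S ssl-verify=ignore may be needed if the certificate is self-signed:

      echo "nightly status from $(hostname): all checks passed" | mailx \
        -s "server status" \
        -S smtp="smtp://mail.example.com:587" \
        -S smtp-use-starttls \
        -S smtp-auth=login \
        -S smtp-auth-user="status@example.com" \
        -S smtp-auth-password="app-password-here" \
        -S from="status@example.com" \
        admin@example.com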

  • Failover NIC with Windows 7 running Apache Server

    - by Benjamin Jones
    I have an Apache server running on a Windows 7 desktop for an internal website (intranet), and I'm trying to make this system as failover-safe as possible. One thing I want to do is use the second NIC, which is currently disabled. I only have one gateway address on my router, and I don't have any computer running a server OS. If NIC1 crashes by chance (just humor me!), how could NIC2's IP take over the host name I have mapped in the hosts file? I don't really care if NIC2's static IP changes, since the host name is the only thing I need. Could I add two IPs to the hosts file with the same host name, and maybe create a script that runs if NIC1 fails? Or is there any failover software that will do this for me?

  • SQLite DB borked when opened on a different machine

    - by pruefsumme
    Hello, I'm using SQLite to store some data. The primary database is on a NAS (Debian Lenny, 2.6.15, armv4l) since the NAS runs a script which updates the data every day. A typical "select * from tableX" looks like this: 2010-12-28|20|62.09|25170.0 2010-12-28|21|49.28|23305.7 2010-12-28|22|48.51|22051.1 2010-12-28|23|47.17|21809.9 When I copy the DB to my main computer (Mac OS X) and run the same SQL query, the output is: 2010-12-28|20|1.08115035175016e-160|25170.0 2010-12-28|21|2.39343503830763e-259|-9.25596535779558e+61 2010-12-28|22|-1.02951149572792e-86|1.90359837597183e+185 2010-12-28|23|-1.10707273937033e-234|-2.35343828462275e-185 The 3rd and 4th column have the type REAL. Interesting fact: When the numbers are integer (i.e. they end with ".0"), there is no difference between the two databases. In all other cases, the differences are ... hm ... surprising? I can't seem to find a pattern. If someone's got a clue - please share!
