Search Results

Search found 34513 results on 1381 pages for 'end task'.


  • Importing GPG Key

    - by Bodo
    I'm having problems importing my GPG keys into a new installation of Debian. I exported the private key a few years ago, and now I'm trying to get everything running on the new Debian system. I ran gpg --allow-secret-key-import --import private-key.asc but I only get this: gpg: Keine gültigen OpenPGP-Daten gefunden. gpg: Anzahl insgesamt bearbeiteter Schlüssel: 0, which translates to: gpg: no valid OpenPGP data found. gpg: total number of keys processed: 0. The file looks correct: it starts with --BEGIN PGP PRIVATE KEY BLOCK----- Version: GnuPG v1.4.9 (GNU/Linux) and ends with -----END PGP PRIVATE KEY BLOCK----- What could be wrong?
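
    A hunch worth checking, sketched below: "no valid OpenPGP data found" is what GnuPG reports when the ASCII armor itself is damaged, and the quoted header --BEGIN PGP PRIVATE KEY BLOCK----- is short a few hyphens (a valid armor line has five on each side, like the END line quoted above). The file name is taken from the question.

        head -1 private-key.asc
        # a valid armor header must read exactly:
        # -----BEGIN PGP PRIVATE KEY BLOCK-----
        # if hyphens were lost (mail client, copy-paste), restore them in
        # an editor, then retry:
        gpg --allow-secret-key-import --import private-key.asc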

    Read the article

  • Exchange Transport Service Started but not working

    - by Philippe
    Good day. We are hosting a Microsoft Exchange server, and everything was working fine until recently, when mail transport started going wrong; we almost have to restart the service every morning. The transport service is started, but mail is not delivered to users, and senders to our server get delayed-delivery notifications. When we restart the service, all the mail is delivered and we're good to go for a day or two. Things I've noticed: the store service grows to around 6 GB of RAM, and the w3wp.exe process hangs around 700 MB. Is there a way to schedule a restart of the transport role every 4 hours or so while I'm solving the issue, so I don't have to worry when I leave for the weekend? And most of all: does anyone have any idea how to solve this issue? Thanks, Philippe
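
    On the stopgap question, a sketch of a scheduled restart using the built-in task scheduler, assuming Exchange 2007/2010 where the transport role runs as the MSExchangeTransport service (the task name is a placeholder):

        schtasks /Create /TN "Restart Exchange Transport" /RU SYSTEM /SC HOURLY /MO 4 ^
            /TR "cmd /c net stop MSExchangeTransport && net start MSExchangeTransport"

    This only masks the symptom; the memory growth pattern is worth chasing separately.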

    Read the article

  • Adjusting the column height of a Word 2007 Mail Merge on every page?

    - by leeand00
    I've been doing mail merges lately, and we use labels that aren't listed in Word's default settings. I tried measuring the labels myself, but despite the measurements they don't seem to fit, and I always end up having to adjust the heights so that they print correctly. When I have 24 or so pages of labels, I have to adjust each page individually, which gets tedious. So I was wondering if anyone has the proper measurements (since mine didn't work), or knows how to adjust the column height on every page in exactly the same way, to avoid wasting labels.

    Read the article

  • Testing home directory scripts by setting $HOME to the location of the test directory

    - by intuited
    I have an interdependent collection of scripts in my ~/bin directory, as well as a developed ~/.vim directory and some other libraries in other subdirectories. I've been versioning all of this using git, and have realized that it would be potentially very easy and useful to develop and test new and existing scripts, vim plugins, etc. using a cloned repo, and then pull the working code into my actual home directory with a merge. The easiest way to do this would seem to be to just change and export $HOME, e.g.:

        cd ~/testing; git clone ~ home
        export HOME=~/testing/home
        cd ~
        screen -S testing-home
        # start vim, write/revise plugins, edit scripts, etc.
        # test revisions

    However, since I've never tried this before, I'm concerned that some programs, environment variables, etc. may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about? Is there a much better way to do this sort of thing?
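
    A sketch of a slightly tighter variant: scope the override to a single process tree instead of exporting it into the whole session. One caveat either way: programs that look up the home directory through the password database (getpwuid) rather than $HOME will still see the real one.

        env HOME=~/testing/home screen -S testing-home
        # everything inside this screen session inherits the test HOME;
        # the parent shell keeps the real one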

    Read the article

  • RDMA architecture - do you need adapters on both ends?

    - by Bobb
    I know Linux can use RDMA NICs like Solarflare's, and I just found that Intel has something similar in its NetEffect cards. But Intel talks all about clusters. Can someone please explain: if I want low-latency networking and install an RDMA NIC in my server, is there a limitation on where the cable can go? Is a specific device expected on the other end, such as a special RDMA switch, or an RDMA adapter in front of the switch, or what? Why all this cluster talk? What if I want a single server running Windows (I can install Windows HPC Server or Windows 2008 R2)?

    Read the article

  • Configure PPPoE on WRT54GL (OpenWrt)

    - by sunny
    I have a home DSL connection from my ISP, with a Beetel modem at my end. I want to make my network wireless, so I bought a Linksys WRT54GL and installed OpenWrt on it. My ISP assigns dynamic IPs, with me connecting via PPPoE. My question is how to configure OpenWrt to work with this. Please suggest which option I should go with, or any other you recommend:

    1. DSL cable -- Beetel modem -- WRT54GL in bridge mode, with the Beetel modem doing the PPPoE and running a DHCP server.
    2. DSL cable -- WRT54GL, with the WRT54GL handling the PPPoE.

    Is option 2 possible? Can I have a setup without bringing the Beetel modem into the picture at all? (See the sketch below for the OpenWrt side of option 2.)
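
    A minimal sketch of the OpenWrt WAN configuration for option 2, in UCI syntax as used on Kamikaze-era releases for the WRT54GL; the credentials are placeholders, and note that the DSL line itself still needs a modem in front acting as a plain bridge, since the WRT54GL has no DSL port:

        # /etc/config/network (WAN section only)
        config interface wan
            option ifname   eth0.1
            option proto    pppoe
            option username 'isp-login'
            option password 'isp-password'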

    Read the article

  • RADIUS traffic accounting - what attributes do I use for traffic (and how)?

    - by Mark Regensberg
    We are building a web front end for an internet access token management system that uses RADIUS (FreeRADIUS), queried from a captive portal. The reason for building this part is integration into the accounting and billing platform that operates behind the scenes (all other parts are currently available open source software). The structure is fairly standard, and setting up the basic bits was easy enough (authentication, traffic updates from the captive portal, account expiry dates/times), but I seem to have run out of ability when it comes to limiting an account by traffic consumed. So we can: set up usernames/passwords; set expiry dates/times for a given user; see the traffic for that user being accurately updated in RADACCT. But we can't figure out the correct way/attribute to expire a user when they have consumed X octets of traffic. What attributes are used, or, maybe more accurately, what would be the correct way to use these attributes to limit an account to a certain volume of traffic? Any links to documentation appreciated; the FreeRADIUS documentation doesn't seem to address the issue directly, or I'm looking in the wrong place... --mark
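
    There is no single standard attribute for a lifetime traffic cap. With FreeRADIUS 2.x and SQL accounting, one common pattern is an rlm_sqlcounter instance that sums octets out of radacct and compares the total to a check attribute; a sketch, where Max-Octets is a locally defined dictionary attribute rather than a standard one:

        sqlcounter octetscounter {
            counter-name = Total-Octets
            check-name   = Max-Octets
            sqlmod-inst  = sql
            key          = User-Name
            reset        = never
            query        = "SELECT SUM(acctinputoctets + acctoutputoctets) \
                            FROM radacct WHERE username = '%{%k}'"
        }

    The counter is then listed in the authorize section, and a user whose running total exceeds their Max-Octets check item is rejected.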

    Read the article

  • Oracle parameter array binding from C# executed in parallel and serially on different servers

    - by redir_dev_nut
    I have two Oracle 9i 64-bit servers, dev and prod. When a C# app calls a procedure with parameter array binding, prod executes the procedure simultaneously for each value in the parameter array, but dev executes serially. So, suppose the sproc does:

        select count(*) into cnt from mytable where id = 123;
        if cnt = 0 then
            insert into mytable (id) values (123);
        end if;

    Assuming the table initially has no id = 123 row, dev gets cnt = 0 for the first array parameter value, then cnt = 1 for each subsequent one; prod gets cnt = 0 for all array parameter values and inserts id 123 once for each. Is this a configuration difference, an illusion caused by a speed difference, or something else?
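
    Whatever explains the difference, a sketch of a way to make the procedure correct under either execution model, by collapsing the count-then-insert race into a single guarded insert; it assumes mytable.id carries a primary key or unique constraint (names are from the question):

        begin
            insert into mytable (id) values (123);
        exception
            when dup_val_on_index then
                null;  -- the row already exists; nothing to do
        end;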

    Read the article

  • FTP transfer hangs for random files

    - by hoffmandirt
    I've been stuck on this FTP issue for a while now. I have IIS 7 set up with an IIS 6 FTP server running on a Windows Server 2008 box. The problem is that I can't download certain files from the FTP server, even though I uploaded those files to it myself; the connection times out after 120 seconds. I have used Wireshark and checked the log files, and the only message I see is the timeout. The first thing that came to mind was permissions, but I have tried probably every combination of permissions I can think of, with the end goal of making the permissions identical for the files that work and the files that don't. With my current list of files, I can download the zip, war, and msi files, but not the txt or sql files. It almost seems like a text-versus-binary thing, but I've changed the transfer mode on the FTP client and also toggled the active/passive options.

    Read the article

  • PCI compliance - Setting BIND to no recursion, cURL can't access external sites

    - by Exit
    I was running a PCI scan and was following directions to change the BIND options from:

        // recursion no;
        allow-recursion { trusted; };
        allow-notify { trusted; };
        allow-transfer { trusted; };

    to:

        recursion no;
        allow-recursion { none; };
        allow-notify { trusted; };
        allow-transfer { none; };

    The end result was that cURL operations stopped being able to reach external sites. I realize that not everything will be 100% for PCI compliance, but can someone explain whether there is a way to balance this for both PCI compliance and function?
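
    A sketch of one middle ground, assuming the box resolves through its own BIND instance via /etc/resolv.conf (which would explain cURL breaking) and that the scan probes only from outside: keep recursion, but answer recursive queries solely for the host itself.

        recursion yes;
        allow-recursion { localhost; };   // built-in ACL: this machine only
        allow-notify   { trusted; };
        allow-transfer { none; };

    External scanners then see a non-recursive server, while local lookups keep working.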

    Read the article

  • Why does cd print when run in command substitution?

    - by reasgt
    If I use the cd Bash builtin in a command substitution, it prints extra stuff to stdout, but only when piped, e.g., to less:

        $ echo `cd .`
        # the output is a single newline, appended by echo
        $ echo `cd .` | less
        # less displays: ESC]2;my.hostname.com - tmp/testenv^G (END)

    What's going on there? This behavior isn't documented in the bash man page for cd. Obviously, running just cd in a command substitution is silly, but something like NEWDIR=`cd mypath; pwd` could be useful. I solved this by instead using NEWVAR=`cd mypath > /dev/null 2>&1; pwd` but I still want to know what's going on. Bash version: GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu), Copyright (C) 2005 Free Software Foundation, Inc. Distro: Scientific Linux SL release 5.5 (Boron)
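
    A sketch of the likely mechanism: ESC]2;...^G is an xterm window-title escape, which suggests cd has been wrapped by a shell function or alias somewhere in the login environment (Red Hat-derived distros commonly ship such wrappers under /etc/profile.d). Normally the terminal swallows the sequence and just retitles the window; inside backticks it is captured into stdout, and less then shows it literally. A way to check, plus an illustrative wrapper of the kind that would produce exactly this output (the function body is hypothetical, not taken from the distro):

        $ type cd    # reports "cd is a function" and prints the body if wrapped

        cd() {
            builtin cd "$@" && echo -ne "\033]2;$(hostname) - ${PWD#$HOME/}\007"
        }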

    Read the article

  • Apache Options -Indexes gives me 404 instead of 403, why?

    - by netmano
    I have an Apache/2.2.21 (Debian) web server on which I disabled directory listing with Options -Indexes, but now I get a 404 error for a directory where I think I should get a 403. I have no idea why I get a 404 rather than a 403. What should I check? I have disabled the autoindex module, and after that I got a 404 for every URL that requests a directory listing (e.g. www.somesite.com/dir). How can I get a 403 for this? (The directory does exist.) As a test I also put Options -Indexes at the end of the main config file (apache2.conf).
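
    A sketch of the distinction, as I understand Apache 2.2's behavior: the 403 for a listing is produced by mod_autoindex itself when Options -Indexes forbids it; with the module disabled outright, nothing claims directory requests and they fall through to a 404. So the way back to a 403 is to keep the module loaded and deny listings with Options (the directory path is a placeholder):

        # first: a2enmod autoindex && service apache2 restart
        <Directory /var/www/>
            Options -Indexes    # with mod_autoindex loaded, listings now draw a 403
        </Directory>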

    Read the article

  • Running evrouter at boot with init.d, or after the X server starts

    - by J V
    I'm using evrouter to set up mouse button binds, and init.d to start it. My init.d file:

        #!/bin/bash
        # Simple init.d script to run evrouter
        ### BEGIN INIT INFO
        # Provides:          evrouter
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Set evrouter bindings
        # Description:       Set evrouter bindings at boot time.
        ### END INIT INFO

        config="/opt/hacks/evrouterrc"

        case "$1" in
            start|restart|reload|force-reload)
                evrouter -c "$config" /dev/input/event*
                ;;
            stop)
                echo "Evrouter is not a daemon, change settings file at '$config' and restart"
                ;;
            *)
                echo "Usage: $0 start" >&2
                exit 3
                ;;
        esac

    evrouter, however, complains: evrouter: could not open display "". If evrouter requires the X server to be up, how do I get init to wait until after the X server starts before running this script? If the X server restarts, will this script run automatically? Running it with sudo service evrouter start still produces the error; can init.d scripts not tell where my display is? (I'm not exactly familiar with init, runlevels, etc.)
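
    A sketch of the constraint and two workarounds: init.d scripts run outside any X session, so DISPLAY is empty and there is no X authority to borrow, regardless of runlevel. The usual fix is to start the bindings from the X session itself; alternatively a root script can point at a running display explicitly. The display number and home path below are assumptions:

        # from the desktop session (e.g. ~/.xprofile or the DE's autostart),
        # which also re-runs it whenever the session restarts:
        evrouter -c /opt/hacks/evrouterrc /dev/input/event*

        # or from a root script, after X is up:
        DISPLAY=:0 XAUTHORITY=/home/youruser/.Xauthority \
            evrouter -c /opt/hacks/evrouterrc /dev/input/event*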

    Read the article

  • SQL Server 2008 R2 upgrade fails on upgrade rule check

    - by Tim
    I'm trying to upgrade an evaluation instance of SQL Server 2008 to a fully licensed instance of SQL Server 2008 R2. I made it most of the way through the installer, but I'm getting stopped at the Upgrade Rules page: the SQL Server Analysis Services Upgrade Service Functional Check is failing. The specific error I get: Rule "SQL Server Analysis Services Upgrade Service Functional Check" failed. The current instance of the SQL Server Analysis Services service cannot be upgraded because the Analysis Services service is disabled or not online. Please start the service and then run the upgrade rules check again. Simple enough: just start the service. Here's where it gets troublesome. When I open Services and try to start the SQL Server Analysis Services (MSSQLSERVER) service, it gives me the following message: The SQL Server Analysis Services (MSSQLSERVER) service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs. Trying from the command line as Administrator yields:

        PS C:\Windows\System32> net start MSSQLServerOLAPService
        The SQL Server Analysis Services (MSSQLSERVER) service is starting...
        The SQL Server Analysis Services (MSSQLSERVER) service could not be started.
        The service did not report an error.
        More help is available by typing NET HELPMSG 3534.

    I've tried changing the logon setting of this service to Administrator, to a user with admin privileges, and to both the Local System and Network Service accounts; nothing works. In addition, when I look at the service through SQL Server Configuration Manager (also run as Administrator), attempting to change the logon setting results in the message: The server threw an exception. [0x80010105] I have no need for Analysis Services itself; all I need is for this one service to run long enough to do the R2 upgrade, then it can shut down again. Any thoughts on how to get the Analysis Services service running? Update: checking the event log, I found an error logged to the Application log from MSSQLServerOLAPService. It has event ID 0, task category (289), and says: The service cannot be started: XML parsing failed at line 1, column 4: Unrecognized input signature.

    Read the article

  • Ubuntu Upstart script hangs on start and stop

    - by sbwoodside
    I have an Upstart script that starts a custom Jetty server. When I do sudo start [myservice], nothing happens. Subsequently, sudo status [myservice] shows it as: [myservice] start/killed, process 3586. Here's the script in /etc/init/[myservice].conf:

        description "[description]"
        author "[my name and email]"

        start on runlevel [2345]
        stop on runlevel [016]

        respawn
        expect fork

        script
            sudo -u www-data /path/to/grafserv-start.sh >> /tmp/upstart.log 2>&1
        end script

    And here is grafserv-start.sh:

        #!/bin/bash
        /usr/bin/java -Djetty.port=3070 -jar /path/to/grafserv/trunk/start.jar
        echo "Done starting GrafServ"

    I've tried redirecting the output of the script command to a tmp logfile, but that file is never created. When I start it, I just get a hang until I ^C. I also tried running it with strace, but that gave me a lot of stuff about sockets.
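
    A sketch of a likely mismatch: expect fork tells Upstart the job forks exactly once, but this chain (sudo, then the wrapper script, then java) forks more than that, so Upstart ends up tracking a PID that has already exited, which matches both the start/killed state and the hanging start. Since the JVM stays in the foreground, one option is to drop expect fork and exec it directly (su stands in for sudo here; paths are from the question):

        respawn
        # java never daemonizes itself, so no "expect" stanza is needed
        exec su -s /bin/sh -c 'exec /usr/bin/java -Djetty.port=3070 \
            -jar /path/to/grafserv/trunk/start.jar' www-data >> /tmp/upstart.log 2>&1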

    Read the article

  • Options for gaming remotely on a LAN?

    - by Schwern
    I have a Windows 7 desktop for gaming, a big bulky tower with a nice graphics card. While the weather is nice I'd like to sit out on my porch, rather than inside, to play games, and I have a high-end MacBook Pro. What are my options? I figure either remote desktop over the LAN to the MacBook, or maybe wireless video, keyboard, and mouse: something so I don't have to physically move the PC. The games range from Skyrim to SW:TOR to Torchlight 2. Whatever the approach, it has to do better than running Boot Camp on the MacBook (MacBookPro8,1, i7 2.7 GHz, but Intel Graphics 3000). I realize there are a lot of issues involved in running a game over a remote desktop at a decent frame rate; I'm interested in practical answers with real experience behind them. Ideally something that works on OS X, so I don't have to reboot into Windows.

    Read the article

  • Msys cd .. command takes me to home directory instead of parent

    - by Adrian
    I'm using MSYS on Windows 7 with what I believe to be a Bash shell. I want to navigate the following directory structure:

        Drive (M:)
        +--- Coding
             +--- CPP
                  +--- projects
                  +--- other_folder_1
                  +--- other_folder_2

    My fstab file contains the following line:

        M:/Coding/CPP/projects/ /home/Adrian/

    ...which makes the projects folder my starting directory when opening the shell. Unfortunately, when I try to cd .. out of projects, I end up in /home instead of CPP. I imagine this is related to what I did in the fstab file. Is there any way for me to keep the projects folder as my starting directory while being able to cd into its parent directories?
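
    A sketch of why this happens and one workaround: the fstab line mounts the projects folder directly at /home/Adrian, so inside MSYS its parent genuinely is /home; the Windows parents M:/Coding/CPP are not part of the mounted path at all. Mounting higher up the tree and starting the shell in a subfolder keeps the parents reachable. The mount point name is an assumption:

        # /etc/fstab: mount the tree root rather than the leaf
        M:/Coding    /coding

        # then pick the start directory with a line at the end of ~/.profile:
        cd /coding/CPP/projects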

    Read the article

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: currently we use Rackspace cloud servers. We have no intention of stopping, but we would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for what we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea; however, in this particular instance the cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. We have access to business-class Verizon FiOS; if I understand correctly, we can get at least 25 dedicated IP addresses with this service, which should be enough. Requirements: each server runs CentOS 6.3; some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ); some are capable of serving static files and Python-driven REST APIs; some host a Cassandra database cluster; one or more are Redis database servers; one or more are PostgreSQL servers. Questions:

    1. What kind of router or switch is needed? We would like the computers to communicate effectively with each other via internal IP addresses. This is especially important for the servers hosting Redis, which need to respond to requests very quickly. Are there special switches or routers needed to connect the servers together?

    2. Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked; I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the Wi-Fi cards in the desktops or any other unexpected hardware limitation?

    3. What tools should be used to "image" the servers? For example, once we get an installation right for a Redis server or a Cassandra node, are there tools that come with CentOS 6.3 to image the server to a USB drive or something like that, or do we need other software for this? (See the sketch below.)

    4. What other things are we missing that we should be concerned about? Thanks so much!
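
    On question 3, a sketch of the CentOS-native approach: rather than a disk image, every Anaconda install already writes a kickstart file that can replay the same build on new hardware. The device name in the boot option is an assumption:

        # the kickstart describing this server's install:
        cat /root/anaconda-ks.cfg

        # replay it on a new box: boot the CentOS installer with the file
        # on a USB stick and add a boot option such as
        #   linux ks=hd:sdb1:/anaconda-ks.cfg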

    Read the article

  • Automatic time tracking with central server, web reports

    - by user124209
    I need software for automatic time tracking on Windows, with the following features: it should record the time spent using the computer each day (start time and end time); it should record which programs the employee used and the total time for each program over a specified period; it must have a centralized server that collects and stores all data (a cloud server outside the company network would be fine); and it must have a web interface for viewing the monthly reports (the last but most important requirement!). A nice feature to have would be automatic generation of timesheets, and Mac OS X support. I am looking to use it for a small team; this is not for personal use. Does anybody know of software with these features?

    Read the article

  • MySQL query (over SSL) fails in IIS 7 using default AppPool identity

    - by Jon Tackabury
    I am trying to run a website locally under IIS 7 on Windows 7. I have the app pool configured to use "Classic" mode, but connecting to a MySQL database that requires SSL fails. If I change the identity to my own user account, it works perfectly; it fails when using the default ApplicationPoolIdentity account. Is there something I'm missing somewhere? Why would running a MySQL query over SSL fail for certain user accounts? Update: this is the exception the MySQL connector throws: "Reading from the stream has failed. Attempted to read past the end of the stream."
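
    A sketch of one thing to try: ApplicationPoolIdentity runs without a loaded user profile, which can break anything that needs a per-user certificate store, and an SSL handshake dying mid-stream fits that pattern. Loading the profile for the pool is a single setting (the pool name is a placeholder):

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /processModel.loadUserProfile:true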

    Read the article

  • Using Zebra LP 2844-Z over the network

    - by Jason Kealey
    I am looking for a network-capable label printer, and am looking at the Zebra LP 2844-Z. Unfortunately, it does not come with a network interface like the lower-end LP 2824 Plus, and the ZebraNet 10/100 Print Server is both expensive for what it does (~$600) and only seems to support wireless networking, not wired; I prefer wired for reliability. Questions: 1. Can I use a cheaper off-the-shelf print server to turn the LP 2844-Z into a network printer? 2. Would I have any trouble communicating with the printer via its own programming language or via OPOS (instead of the Windows driver)? 3. Are cheaper print servers reliable? 4. Would I be better off getting another printer model with networking built in, to avoid issues caused by the print server? What other printer would you recommend?

    Read the article

  • Getting around the lack of GPT support with CentOS 5.4

    - by sxanness
    Here is my issue, and I'm hoping someone out there has an answer so I don't end up stuck at my co-location all day. Last night I came here and upgraded a server (Dell 2970) to four 1 TB hard drives in RAID 5, which leaves a 3 TB block. I tried to partition this but keep getting an error that GPT is not supported, so I found a site online telling me I need to run the dd command and write random data to /dev/sda. That's great (if it works), but it's taking forever, and I have two more machines to upgrade today and not a chair in sight. Does anyone have advice on how I can avoid this issue beforehand? Thank you for your advice and support.
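
    A sketch of a faster route, assuming the array is a data volume rather than the boot disk: the "GPT is not supported" message is typically fdisk's, which cannot create or edit GPT labels, but the parted shipped with CentOS 5 can, and there is no need to overwrite the device with dd first. The device name and filesystem are assumptions:

        parted /dev/sda mklabel gpt
        parted /dev/sda mkpart primary ext3 0% 100%
        mkfs.ext3 /dev/sda1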

    Read the article

  • rsync command deletion error "IO error encountered -- skipping file deletion"

    - by Jam88
    I use an rsync command to back up files from one of my Ubuntu servers to another Ubuntu machine. The backup server triggers a script that runs rsync. Here is the command I use:

        rsync -rltvh --partial --stats --exclude=.beagle/ --exclude=.* --delete-after \
            root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1

    live_server is ssh-able without a password, so that part works. The problem is with the --delete-after option: after all files are synced, the deletion step is skipped. The log file shows: IO error encountered -- skipping file deletion. Looking further in the log, there were errors during the file sync: rsync: send_files failed to open "/home/xyz/Desktop/PPT_session_1_context.pdf": Permission denied (13). So my understanding is that since rsync could not read all the files from the source, it skips the deletion for safety. Is there any way to make --delete-after work even when there is a permission error? I do not want to use forced deletion, as that would be dangerous in some situations.
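
    rsync has a switch for exactly this: --ignore-errors tells the delete pass to proceed even after I/O errors. The sketch below adds it to the question's command; note the safety check exists because a file that merely failed to read on the sender could otherwise be treated as deleted, so fixing the underlying permission problem is still the safer cure.

        rsync -rltvh --partial --stats --ignore-errors \
            --exclude=.beagle/ --exclude=.* --delete-after \
            root@live_server:/home/ /home/live_server_backup/home \
            > /tmp/logfile.log 2>&1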

    Read the article

  • How can I make it difficult to install a new operating system on a certain computer?

    - by D W
    I want to host a website on a desktop computer running Ubuntu with a Windows virtual machine. I will give away the computer in exchange for a number of months of remote web hosting, and I want to add some kind of lock (hardware or otherwise) so that the end users will have difficulty just reinstalling Windows and using the machine however they want, in violation of the contract. Ideally, the machine would die if reinstallation of the OS were attempted. It doesn't have to be completely insurmountable, but it has to be difficult enough to prevent casual reinstallation. Perhaps on boot the system could check whether certain files exist on the computer and refuse to boot if they do not; I don't know if this is possible, but maybe the BIOS could be password-protected and search for the files before booting. The files it looks for could be date-sensitive, i.e. require remote replacement on a schedule.

    Read the article

  • AuthResend query string being appended to URL

    - by Alastair Pitts
    One of our clients is having an issue where postback seems to be broken when they connect to our SharePoint application. When they navigate to a URL, an erroneous query string gets appended, so the end of the URL becomes: .../default.aspx&AuthResend1908BC2350124b5095AB75012FA405BA This doesn't happen on any other client's computers or on ours internally; it is the only difference, and it seems to be breaking their pages. A quick Google suggests it has to do with a Microsoft ISA Server, but I have no experience with that. Is this a bug or a setting on their ISA server?

    Read the article
