Search Results

Search found 51910 results on 2077 pages for 'run level'.

  • Using WebSphere CloudBurst with PowerVM for AIX virtualization in a cloud

    - by ADD Geek
    Hi there. We are studying virtualization options to reduce our datacenter cost, and this research was assigned to me. We looked into alternatives and almost reached the conclusion that PowerVM is the only option for virtualizing pSeries servers. We found no explicit mention of cloud support in any document, but there was mention of CloudBurst. From the videos we watched and the documents we read, it seems that CloudBurst is oriented more towards application servers (WebSphere software), but our environment does not rely only on WebSphere: we have some banking applications, Oracle databases and MQ/Broker. The questions are:

    1. Can we virtualize the existing applications (all running AIX) on a cloud running on top of some of the existing servers, given that we do the sizing properly?
    2. Is PowerVM supposed to run on top of CloudBurst?
    3. If the above is applicable, is this some sort of HA solution (since a VM will run on top of multiple physical boxes, while the same physical box will run multiple live images)?

    Thanks for your help.

  • Scheduled Jobs during hours of autumn time change

    - by NealWalters
    I'm wondering how other people deal with this scenario. Suppose you have a job scheduled to run at 1:30 am. In the autumn, when the time changes, the hour from 1:00:00 to 1:59:59 repeats itself, so that job would run twice. This applies to Windows Task Scheduler, SQL Agent or any other scheduling tool; most of them seem to be based on machine time, not UTC. If I could tell the scheduler to run the job at a given UTC time each night, I wouldn't have the duplicate-hour issue.
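
    One hedged workaround, if the job itself is a script: record each run as a UTC epoch timestamp and exit early if the job already ran within the last day, so the repeated hour becomes harmless. A minimal sketch (the marker-file path and the 23-hour threshold are assumptions):

        #!/bin/sh
        # Skip this run if the job already ran in the last 23 hours.
        # STAMP is a hypothetical marker file; epoch seconds ignore DST.
        STAMP=/var/run/nightly-job.stamp
        NOW=$(date -u +%s)
        LAST=$(cat "$STAMP" 2>/dev/null || echo 0)
        if [ $((NOW - LAST)) -lt $((23 * 3600)) ]; then
            exit 0    # second firing during the repeated 1:00-2:00 hour
        fi
        echo "$NOW" > "$STAMP"
        # ... actual job steps go here ...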

  • Running Mathematica 5 remotely

    - by oxinabox.ucc.asn.au
    OK, I have Mathematica 5, a powerful CAS. I also have a cheap netbook which is not only too slow to run Mathematica; I doubt it has the hard-drive space either. I do, however, have remote access to a number of very powerful computers (most of which run various Linuxes, but one of which is Windows Server 2008), mostly over SSH, though other protocols can be arranged for some, I'm sure. (I might even be able to Remote Desktop to the Windows Server 2008 box.) So I'd like to install Mathematica onto one of these machines and then run it remotely, either from the command line via PuTTY or via some other method. I glanced through the Mathematica documentation and read something about a MathLink program, which links the front end installed on my computer to a remote kernel. Does anyone have any experience with this? I'm not sure if this belongs here or on Super User.
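
    For the command-line route, a minimal sketch, assuming the Linux installation puts Mathematica's text-based kernel on the PATH as math (the usual launcher name in version 5) and that the kernel accepts -noprompt for batch use:

        # Interactive kernel session over SSH (-t keeps a terminal for the REPL):
        ssh -t user@remotehost math

        # Batch evaluation: run a script remotely, collect the output locally.
        ssh user@remotehost "math -noprompt < job.m" > result.txt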

  • Kerberos: Running an app with a parameter using krenew

    - by Mihai Todor
    I need to run an application with krenew, but the application also needs to receive a parameter via command line and I need to send its output to a file. From the documentation, it looks like this should do the trick: krenew -t -- sh -c 'compute-job > /afs/local/data/output' but, unfortunately, when I run the command below: krenew -s -- sh -c './my_app config.xml > results/test.txt &' the application just dies after a while and I can see from the output of ps aux that krenew is not running along with my_app. I am not sure what the parameter -t does, and as far as I can see, if I run krenew -s ./my_app, it works properly. I hope someone can clarify this.
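
    One plausible cause: backgrounding my_app inside sh -c makes the wrapper shell exit immediately, so krenew sees its command finish and exits too, leaving my_app without ticket renewal until it dies. A hedged alternative, assuming the kstart implementation of krenew, whose -b flag daemonizes krenew itself after it starts the command:

        # Keep the app in krenew's foreground; background krenew instead.
        krenew -b -s -- sh -c './my_app config.xml > results/test.txt'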

  • Generating/managing config files for hosted application

    - by mfinni
    I asked a question about config management and haven't seen a reply. It's possible my question was too vague, so let's get down to brass tacks. Here's the process we follow when onboarding a new customer instance into our hosted application: how would you manage this? I'm leaning towards a Perl script to populate templates and generate shell scripts, config files, XML config files, etc. (a minimal sketch of that idea follows the list below). Looking briefly at CFengine and Chef, it seems like they're not going to reduce the amount of work, because I'd still have to manually specify all of the changes/edits within the tool. That doesn't seem to be much of a gain over touching the config files directly.

    1. We add a stanza to the main config file for the core (3rd-party) application. This stanza has values that define:
       - the instance (customer) name
       - the TCP listener port for this instance (not one currently in use)
       - the DB2 database name (a serial numeric identifier; it already exists, pre-staged for us by the DBAs)
       - three sub-config files, by name; they need to be created from 3 templates and be named after the instance
       The sub-config files define:
       - the filepath for the DB2 volumes
       - the filepath for the storage of objects
       - the filepath for just one of the DB2 volumes (yes, redundant to the first item)
    2. We run some application commands and start the instance.
    3. We do some LDAP thingies (make an OU for the instance, etc.).
    4. We add a stanza to the config file for our security listener, which acts as a passthrough to LDAP:
       - instance name
       - LDAP OU
       - TCP port for the instance
       - DB2 database name
    5. We restart the security listener (off-hours), change the main config file from item 1, and stop and restart the instance. It is now authenticating via LDAP.
    6. We add the stop and start commands for this instance to the HA failover scripts.
    7. We import an XML config file into the instance that defines things for the actual application for the customer: user names, groups, permissions, and business rules. The XML is supplied by the implementation team.
    8. Now we configure the dataloading application. We add a stanza to the existing top-level config file that points to a new customer-level config file. The new customer-level config file includes:
       - the instance (customer) name
       - the DB2 database name
       - an arbitrary number of sub-config files, by name
       Each of those sub-config files defines:
       - filepaths to the directories for ingestion, feedback, backup, and failure; those filepaths share a common customer-specific folder, with one folder per sub-config file
       - each of those filepaths needs to be created
    9. We add this customer instance to our monitoring scripts, which confirm that the proper processes are running and can be logged into. Those monitoring config files also include the instance name, the TCP port, the DB2 database name, etc.
    10. There's also a reporting application that needs to be configured for the new instance.

    You get the idea. There's also XML that is loaded into WAS by the middleware team; we give them the values to plug into the XML. They could very easily hand us the template and we could give them back completed XML.
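
    As a rough illustration of the templating idea (a sketch only: the script name, placeholder tokens, template names and directory layout are all hypothetical), the three per-instance sub-config files could be stamped out like this:

        #!/bin/sh
        # mkinstance.sh - generate the sub-config files for one instance.
        # @INSTANCE@, @PORT@ and @DBNAME@ are made-up placeholder tokens
        # that would appear in the template files.
        INSTANCE="$1"; PORT="$2"; DBNAME="$3"
        for tmpl in db2vols objects onevol; do
            sed -e "s/@INSTANCE@/$INSTANCE/g" \
                -e "s/@PORT@/$PORT/g" \
                -e "s/@DBNAME@/$DBNAME/g" \
                "templates/$tmpl.tmpl" > "conf/$INSTANCE-$tmpl.conf"
        done

    Invoked as, say, ./mkinstance.sh acme 7801 DB0042; the same pattern extends to the security-listener stanza and the dataloader configs.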

  • SQL Server Agent jobs and turning off the server

    - by Tim Joseph
    I'm really new to SQL Agent jobs, but I am attempting to build up a maintenance regime for a server that will be turned off and on again at unknown intervals. It may run without being shut down for a month, or it might only be turned on 9-5; we don't know, and the client can't tell us because they don't know either. So what I'm wondering is: what do I need to do to get SQL Server to run monthly and daily jobs either when they are due or, if the due date was missed, when the server is next powered on? I could come up with a mish-mash of periodic jobs and on-power-up jobs, but if there is something more elegant, that would be wonderful. Obviously I'll need to ensure the SQL Server Agent is configured to start when the computer is powered up, but what else?
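
    For the catch-up half, one hedged sketch: attach a second schedule to each job that fires whenever the Agent service starts (sp_add_jobschedule is the documented msdb procedure; freq_type = 64 means "start when SQL Server Agent starts"; the job name here is hypothetical), and have the job's first step check its last successful run in msdb.dbo.sysjobhistory so the start-up firing only does work when a regular run was actually missed:

        -- Add a "run at Agent start-up" schedule to an existing job.
        EXEC msdb.dbo.sp_add_jobschedule
            @job_name  = N'NightlyMaintenance',   -- hypothetical job name
            @name      = N'CatchUpOnPowerUp',
            @enabled   = 1,
            @freq_type = 64;                      -- 64 = when Agent starts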

  • virtual machines and cryptography

    - by Unknown
    I suspect I'm a bit off-topic for the site's mission, but it seems more fitting for this question than Stack Overflow. I'm preparing to create a VM holding sensitive data (personal use; it will be a web + mail + ... appliance of sorts), and I'd like to protect the data with cryptography; the final choice has to be cross-platform for the host. Basically, I have to choose between guest-level cryptography (say, dm-crypt or similar) or host-level cryptography with TrueCrypt. Do you think the "TrueCrypt volume containing the virtualized disks" approach will hurt the I/O performance of the VM badly (so that dm-crypt-like approaches inside the VM would be better), or is it doable? I'd like to protect all the guest data, not only my personal data, so I can suspend the VM freely without worrying about the swap partition, etc.
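
    Rather than guessing at the I/O cost, it is easy to measure on a throwaway file inside the guest; a rough sketch (size and path are arbitrary), run once on each candidate setup and compared:

        # Crude sequential-write test; conv=fdatasync forces data to disk,
        # so the encryption layer's real cost shows up in the elapsed time.
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
        rm /tmp/ddtest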

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of system failure the write-ahead log might not be saved correctly and might in fact get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync; however, I was unable to make any configuration changes with hdparm, since I get the following:

        $ sudo hdparm -I /dev/xvdf
        /dev/xvdf:
        HDIO_DRIVE_CMD(identify) failed: Invalid argument

    It looks like that option is not available in the kind of setup you get on EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?
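
    One hedged way to probe what the storage stack actually does with flushes is to time the sync primitives directly on the EBS volume. This assumes a PostgreSQL recent enough to ship pg_test_fsync (it appeared as a contrib tool around 9.1); the path is a placeholder and should sit on the same filesystem as the WAL:

        # Time fsync/fdatasync/open_datasync on the WAL's filesystem.
        pg_test_fsync -f /mnt/ebs/pgdata/fsync-probe.out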

  • Routing and Remote Access Service won't start after full disk

    - by NKCSS
    The HDD of the server ran out of disk space, and after a reboot RRAS won't start anymore on my 2008 R2 server. Error details:

        Log Name: System
        Source: RemoteAccess
        Date: 2/5/2012 9:39:52 PM
        Event ID: 20153
        Task Category: None
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: Windows14111.<snip>
        Description: The currently configured accounting provider failed to load and initialize successfully. The connection was prevented because of a policy configured on your RAS/VPN server. Specifically, the authentication method used by the server to verify your username and password may not match the authentication method configured in your connection profile. Please contact the Administrator of the RAS server and notify them of this error.

        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="RemoteAccess" />
            <EventID Qualifiers="0">20153</EventID>
            <Level>2</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2012-02-05T20:39:52.000Z" />
            <EventRecordID>12148869</EventRecordID>
            <Channel>System</Channel>
            <Computer>Windows14111.<snip></Computer>
            <Security />
          </System>
          <EventData>
            <Data>The connection was prevented because of a policy configured on your RAS/VPN server. Specifically, the authentication method used by the server to verify your username and password may not match the authentication method configured in your connection profile. Please contact the Administrator of the RAS server and notify them of this error.</Data>
            <Binary>2C030000</Binary>
          </EventData>
        </Event>

    I think it has something to do with a corrupt config file, but I am unsure of what to do. I removed the RRAS role, rebooted, and re-added it, but it keeps failing with the same error. Thanks in advance.

    [UPDATE] If I set the accounting provider from 'Windows' to '' the service starts, but VPN won't work. Any ideas how this can be repaired?

  • Running rsync on network connect

    - by user40495
    I have one Mac which is always on and is my main computer. I also have a MacBook, and I'm trying to sync my iPhoto library between them. I can successfully use rsync to sync the files, and I'm using a cron job to have it run once a day. In reality, though, the MacBook isn't always on, so I'm looking for a way to run rsync whenever the two computers are on the same Wi-Fi network. So I'm guessing the best approach is to somehow run rsync when the AirPort is connected. What's the best way?
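
    A hedged sketch of one approach: keep a frequent cron (or launchd) interval, but make the script a no-op unless the always-on Mac is reachable. The hostname and paths are placeholders:

        #!/bin/sh
        # Sync only when the desktop answers on the local network.
        # "desktop.local" is a hypothetical Bonjour name; adjust the paths.
        if ping -c 1 -t 2 desktop.local >/dev/null 2>&1; then
            rsync -az "$HOME/Pictures/iPhoto Library/" \
                  'desktop.local:Pictures/iPhoto\ Library/'
        fi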

  • Simple queries occasionally running very slowly

    - by Johan
    I have some very simple queries that occasionally run very slowly. The table viewed_sites has about 10-20 rows. Running EXPLAIN ANALYZE always gives a runtime of less than 3 milliseconds. When the query is run automatically (every 10 seconds) it occasionally takes over a second to run.

    The query:

        INSERT INTO ga.viewed_sites (site_id) VALUES ('gop2')

    The table:

        CREATE TABLE viewed_sites (
            site_id character varying(4) NOT NULL,
            last_viewed timestamp with time zone DEFAULT now() NOT NULL
        );

    The (occasional) log result:

        2010-05-24 15:47:55 UTC LOG: duration: 1044.632 ms statement: INSERT INTO ga.viewed_sites (site_id) VALUES ('gop2')

    It's a horribly vague question, but what could be causing this? I suppose it comes down to CPU, RAM, HDD or some combination of the above.

    PostgreSQL 8.3, Ubuntu 8.04, Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66GHz, 2 GiB RAM

  • launchctl - use rvm instead of system Ruby in executed scripts?

    - by Stefan Kendall
    I have a launchctl job I define as such:

        <key>ProgramArguments</key>
        <array>
            <string>/bin/sh</string>
            <string>-c</string>
            <string>~/projects/script.sh</string>
        </array>

    When I run script.sh manually, the script works fine, as it uses the currently configured rvm version of Ruby. When I run this through launchctl, the system version of Ruby is used, which breaks the script. How can I get this script to run with the right version of Ruby available?
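
    A hedged sketch of one common fix: point ProgramArguments at a small wrapper that loads rvm first (the ~/.rvm path is rvm's default install location; adjust if yours differs):

        #!/bin/bash
        # wrapper.sh - launchd starts a bare environment, so rvm must be
        # sourced explicitly before handing off to the real script.
        source "$HOME/.rvm/scripts/rvm"
        exec "$HOME/projects/script.sh"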

  • Weather Logging Software on Windows Home Server

    - by Cruiser
    I'm looking for some weather logging software that I can run as a Windows Home Server add-in, or as a service on my Home Server, so I don't need to log into my Home Server to log weather data. I have an Oregon Scientific WMR918 weather station, and an HP MediaSmart EX485 Windows Home Server. The two are currently connected through a serial bluetooth adapter, but that shouldn't matter as the computer sees it basically as a serial device. I'm currently using Cumulus to log data and upload to Weather Underground, but it is a regular windows application, so I need to remain logged into my Home Server by RDP in order to run the software (I disconnect, but don't log off so the session remains open). Ideally I would like something to run as a service or WHS add-in, so that it runs all the time without logging in, can log data from my WMR918, and can upload to Weather Underground. Thanks!

  • How do I get a more recent version of Java on my Mac than shows up in Software Update?

    - by Cd Lolly
    I need at least Java 1.6 to run a program that someone else in my lab wrote. The Java website tells me to update Java via Apple's Software Update function; I've run this a few times, but it only got up to Java 1.5.0_24, and it now says no more updates are available for my computer. Is there another way to update Java on a Mac? Is my operating system maybe too old for Java 1.6? I'm not sure what I'm running exactly, and I can't find a list of which Mac operating systems run which versions of Java, because the Java site just suggests using the Mac's Software Update.
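
    A hedged first step is to check which OS release and Java build are actually installed; Apple's Java 6 generally required Mac OS X 10.5 or later on 64-bit Intel hardware, which would explain Software Update stopping at 1.5 on an older system:

        # Print the OS version and the default Java version.
        sw_vers -productVersion
        java -version
        # On 10.5+ this lists every installed JVM (absent on older systems):
        /usr/libexec/java_home -V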

  • Windows Server Task Scheduler: Running scheduled executable fail-safe?

    - by Mikael Koskinen
    I have an executable which I've scheduled to run once every five minutes, using Windows' built-in Task Scheduler. It's crucial that this executable runs, because it updates a few time-critical files. But how can I react if the virtual server running the executable goes down? At no point should there be more than a 15-minute gap between runs. As I'm using Windows Server and its Task Scheduler, I wonder whether it's possible to create some kind of cluster which automatically handles the situation. The problem is that the server in question is running on Windows Azure, and I don't think I can create actual clusters using the virtual machines. If the problem can be solved using a 3rd-party tool, that's OK too. To generalize the question a little: how do I make sure that an executable is run once every 5 minutes, even if there might be server failures?

  • Restricting SSRS subscriptions to shared schedules only

    - by Matt Frear
    Hi all, I'm reasonably new to SQL Server Reporting Services and Report Manager, and completely new to SSRS's subscriptions. We're running SSRS 2008. Out of the box it seems that a user with the Browser role can create a subscription to a report and schedule it to run at any time they choose. As an admin I have set up a shared schedule called "Overnight reports" which runs every night from 1 am. I would like it so that when a regular user creates a subscription they can only use one of my shared schedules, so that their subscription will only run overnight. Is this possible? Thanks -Matt

  • Why is the java -Xmx option not working?

    - by Zenofo
    On my Ubuntu 11.10 VPS, before I run the jar file:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:           256          5        250          0          0          0
        -/+ buffers/cache:           5        250
        Swap:            0          0          0

    Then I run a jar file that should be limited to a maximum of 32 MB of memory:

        java -Xms8m -Xmx32m -jar ./my.jar

    Now the memory state is as follows:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:           256        155        100          0          0          0
        -/+ buffers/cache:         155        100
        Swap:            0          0          0

    The jar occupied 150 MB of memory, and I can't run any other java command:

        # java -version
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.
        # java -Xmx8m -version
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I want to know why the -Xmx parameter does not take effect. How can I limit the memory the jar file uses?
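
    For what it's worth, -Xmx caps only the Java object heap; the JVM additionally reserves memory for the permanent generation, per-thread stacks and its own code, which is why resident usage can far exceed 32 MB. A hedged sketch of trimming the other pools too (real HotSpot flags of that era, but the values are guesses to tune):

        # Cap the permanent generation and shrink per-thread stacks as well.
        java -Xms8m -Xmx32m -XX:MaxPermSize=16m -Xss256k -jar ./my.jar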

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1 TB hard drive, and it filled the drive; there is only 4 KB free. The MDF file is 323 GB and the LDF is 653 GB. The disk this DB is on has no other files on it, other than the MDF and LDF, so it's impossible to free up any space on the drive. The main hard disk is smaller, but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records, because the DB is in a failed mode (due to no disk space) and it doesn't respond to most commands.

    The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data.

    I've spent a lot of time looking for a way out of this problem, but everything I've found first involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck, and any help would be greatly appreciated. I get the following log when trying to switch the DB to online mode:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commands:

    - DBCC SHRINKFILE: can't be run, because doing a 'use DBNAME' fails.
    - Detaching the DB and then changing the location of the MDF/LDF files: this fails because the DB is in an offline mode, so you can't run detach.

    I'm at a loss about what else to try. Thanks.

  • Systemd can't start script?

    - by TokyoMEWS
    I have a bash script I want to run on startup. My system runs systemd, so I created a .service file with what I think is the necessary information:

        [Unit]
        Description=My Script
        After=network.target

        [Service]
        ExecStart=/home/myscript.sh

        [Install]
        WantedBy=multi-user.target

    I used systemctl enable to 'register' it and rebooted. On boot I was told my script would be executed, but I could neither see any of the messages echo should display on screen, nor did it write anything to a file, as the script instructs. Additionally, it does not start the application it's supposed to start. systemctl status tells me that the script has run and exited successfully. Still, the script has no effect. If I run the script from a shell, it works perfectly fine. Does anyone know what my problem could be?
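
    A few hedged things worth checking (the unit name myscript.service is assumed from the file above). systemd gives the script no terminal, so echo output lands in the journal rather than on screen, and relative paths resolve against /, not the script's directory:

        # Is there a shebang, and is the script executable?
        head -1 /home/myscript.sh    # expect something like #!/bin/bash
        ls -l /home/myscript.sh      # needs the execute bit (chmod +x)

        # Where did the echo output go? systemd captures it in the journal.
        journalctl -u myscript.service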

  • AMD Processors and the Windows Phone 8 Emulator

    - by Aj Patel
    I would madly appreciate it if anyone in this community could help me with my question. The background is that I want to develop Windows Phone 8 applications, but neither of my current computers' processors has the Hardware Virtualization and Second Level Address Translation support that is needed to run the emulator. I have my eye on an AMD computer, the g7-2243us (I like it because it has a 1600x900 screen resolution). I looked up this link, which shows that this computer's AMD processor (next-gen AMD quad-core A8-4500M Accelerated, 1.9 GHz up to 2.8 GHz, 4 MB L2 cache) supports AMD-V hardware virtualization. So, will this computer be able to run the emulator? Thank you so much for your answers. I'm pretty sure it will run the emulator, but I just want to make sure before spending $400. Thank you all so much.

  • Ubuntu: Take actions when system temperature gets too high

    - by Josh
    One of the CPU fans on my Compaq Presario laptop running Ubuntu 9.10 seems to have bitten the dust. The fan is deep within the case, and I intend to replace the laptop in the next 6 months, so it's not worth repairing. I have the laptop on a cooling pad, and most of the time the system is fine, with CPU temps around 90°-110°F. Occasionally, however, I'm seeing random lockups, which I believe are due to the system overheating. How can I configure the system to:

    - lower the CPU speed when the temperature reaches a certain level (e.g. 110°F)?
    - shut down the system when the temperature reaches a critical level (and what would that be? 130°F?)?
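
    A hedged sketch of a crude watchdog that cron could run every minute. The paths and thresholds are assumptions: thermal zone names vary by machine, the sysfs value is in millidegrees Celsius, 110°F/130°F convert to roughly 43°C/54°C, and cpufreq-set comes from the cpufrequtils package:

        #!/bin/sh
        # Throttle at ~110 F, halt at ~130 F. Sketch only; tune per machine.
        TEMP=$(cat /sys/class/thermal/thermal_zone0/temp)   # millidegrees C
        if [ "$TEMP" -gt 54000 ]; then                      # ~130 F
            /sbin/shutdown -h now "CPU overheating"
        elif [ "$TEMP" -gt 43000 ]; then                    # ~110 F
            cpufreq-set -c 0 -g powersave
        fi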

  • IF commands in a batch file

    - by Rossaluss
    I'm writing a small batch file to replace users' themes and charts in Office, and I have the batch file below, which works just fine.

        cd c:\documents and settings\%username%\application data\microsoft\templates
        echo Y|rmdir charts /s
        mkdir charts
        echo Y|del "c:\documents and settings\%username%\application data\microsoft\templates\document themes\*.*"
        net use o: \\servername\sms
        copy "o:\ppt themes\charts\*.*" "c:\documents and settings\%username%\application data\microsoft\templates\charts"
        copy "o:\ppt themes\Document Themes\*.*" "c:\documents and settings\%username%\application data\microsoft\templates\document themes"
        c:
        net use o: /delete

    Now, I want the above to run only if it hasn't run before, as we'll be pushing this out to all users for around 2 weeks to catch people who aren't in every day. Is there any way to begin the script with something that looks for one of the new themes/charts already pushed down and, if it's present, skips the rest? Any help on this would be greatly appreciated, as I'm pretty new to these batch files.
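
    A hedged sketch of such a guard in the same batch idiom (the file name checked for is a placeholder; substitute one of the actual pushed theme files):

        REM Skip everything if a previously pushed file is already present.
        REM "newtheme.thmx" is a hypothetical example file name.
        IF EXIST "c:\documents and settings\%username%\application data\microsoft\templates\document themes\newtheme.thmx" GOTO :EOF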

  • How can I turn formula results into values that can be filtered or used with VLOOKUP in Excel

    - by Burt
    I am having an issue: I am using various formulas to move and split data from various sources. The problem is that when my final results post to their final destination, I still need to either run advanced filters or a VLOOKUP on the results. I can't do this because, as an example, if cell A1 shows a value of A127, the actual cell content is:

        =RIGHT(A2,FIND(" ",A2&" ")-2)

    Everything I read says to copy and paste special as values, but this doesn't work for me, as the idea is to have the formulas/macros run everything, eliminating cutting and pasting. In the case above, I have a formula that pulls that info from a spreadsheet that is saved every week. Once it is pulled, part of it is cut out into another column. I then need to run a VLOOKUP on those results against data already contained on another tab.

  • /etc/profile.d and "ssh -t"

    - by petersohn
    I wanted to run a script on a remote machine. The simple solution is this:

        ssh remote1 some-script

    This works until the remote script wants to connect to another remote machine (remote2) which requires interactive authentication, like this one (remote2 is only reachable through remote1 in this case):

        ssh remote1 "ssh remote2 some-script"

    The solution for that problem is to use the -t option for ssh:

        ssh -t remote1 "ssh remote2 some-script"

    This works, but I get problems in the case where I use this (where some-script may execute further ssh commands):

        ssh -t remote1 some-script

    I found that some environment variables are not set which are set when I don't use the -t option. These environment variables are set in scripts from /etc/profile.d. I guess that these scripts are not run for some reason when using the -t option, but are run when I don't use it. What is the reason for this? Is there any way to work around it? I am using SUSE Linux (version 10).
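
    A hedged way to confirm what is happening and to work around it. FOO stands for any variable exported by one of the /etc/profile.d scripts; bash's -l flag forces a login shell, which reads /etc/profile (and, on SUSE, the profile.d scripts):

        # Compare the two environments directly:
        ssh remote1 'echo $FOO'
        ssh -t remote1 'echo $FOO'

        # Workaround: run the script under an explicit login shell.
        ssh -t remote1 'bash -lc some-script'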
