Search Results

Search found 54052 results on 2163 pages for 'run configuration'.


  • Scheduled tasks fail to start unless I'm logged in to the server

    - by Chuck
    The tasks need to open a CMD window, pass net use commands, and then run a DIR command, piping the output to a file on the server. Whether the task runs as me (sysadmin) or as one of the system accounts, it will only run if I'm physically logged into the server. "Log on as batch job" is set in the security properties for both users (me and the service account), security is granted on all directories, and so on. It almost acts as if a scheduled task that is not physically connected to a display can't create a CMD window and pass the WinID so the command can be sent, but I'm guessing. Does anyone know of a document that explains how the server handles creation of a window for a scheduled task when no attached user is associated with it?

    If I log onto the box and run the scheduled tasks they run fine. Otherwise they produce no errors or event log entries, just show that they ran successfully, and set the next run time. I have tried with the "run only when user is logged on" checkbox both on and off, and it makes no difference. Other tasks work fine, except that they act on local drives with no display writing or updating taking place. So I'm guessing the system either can't instantiate a window when no display is connected to a logged-on user, or it can't establish one when it tries to create a virtual screen. You'd think it would just create a memory map and then map it to a display device, but that doesn't seem to be the case, and I can find no documentation on how the system handles a scheduled task or how to invoke a fake or virtual screen it could write to so that it appears a user is connected.

    This is driving me nuts. I've tried everything I can think of, as well as our network team's ideas, and nothing seems to work. Thanks
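    For what it's worth, on Windows Vista/Server 2008 and later, scheduled tasks run in the non-interactive session 0, so no window is ever displayed, but cmd.exe itself still runs and can redirect output to a file without a console. A minimal sketch of a batch file written for non-interactive use, with hypothetical server, share and account names:

        @echo off
        rem runs with no console window when started by Task Scheduler
        net use Z: \\myserver\share P@ssw0rd /user:MYDOMAIN\svc_task
        dir Z:\ > \\myserver\share\logs\dirlist.txt
        net use Z: /delete

    And registering it to run without a logged-on user (again, all names here are hypothetical):

        schtasks /Create /TN "NetUseDir" /TR "cmd /c C:\scripts\netuse_dir.cmd" ^
            /SC DAILY /ST 02:00 /RU MYDOMAIN\svc_task /RP P@ssw0rd /RL HIGHEST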


  • Problem running mysql client, cannot connect to mysql server

    - by ehsanul
    Edit 3: Thanks for the help everyone. Sorry for wasting anybody's time, but it seems a simple reboot solved it. I should have known better; I just assumed the "restart" solution was mostly valid only for MS Windows (no offense). I'll keep this in mind before I ask a question here again.

    I installed the mysql-client-5.0 and mysql-server-5.0 packages on Ubuntu 8.04 using sudo apt-get install. When I try to run the "mysql" command, I get the following error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    To verify that the MySQL server is running, I tried this, and it does seem to be running, with the correct socket too:

        $ ps aux | grep mysql
        root     13388  0.0  0.0   1772   528 ?     S   06:24  0:00 /bin/sh /usr/bin/mysqld_safe
        mysql    13553  0.0  1.4 127012 15332 ?     Sl  06:25  0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
        root     13555  0.0  0.0   3008   696 ?     S   06:25  0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
        ehsanul  16910  0.0  0.0   3092   772 pts/4 R+  07:17  0:00 grep mysql

    So I don't understand why I'm getting an error trying to connect to the MySQL server. Note that I'm completely new to MySQL.

    Edit: As requested in the comments, the exact command that returns the error is simply "sudo mysql". And when I check netstat for active network services, I do see an entry for port 3306, with Protocol: tcp, IP Source: 127.0.0.1, State: LISTEN.

    Edit 2: It appears the /var/run/mysqld/mysqld.sock socket doesn't exist (if I'm interpreting the following output correctly):

        $ ls -al /var/run/mysqld/
        total 0
        drwxr-xr-x  2 mysql root  40 2009-08-06 06:36 .
        drwxr-xr-x 20 root  root 860 2009-08-06 06:25 ..
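    A short diagnostic sketch for this kind of situation (Ubuntu 8.04 paths assumed): since mysqld is listening on TCP 3306 but the socket file is missing, forcing a TCP connection can confirm the server itself is healthy, and restarting the service normally recreates the socket:

        # bypass the missing socket by forcing a TCP connection
        mysql --protocol=tcp -h 127.0.0.1 -P 3306 -u root -p

        # restarting usually recreates /var/run/mysqld/mysqld.sock
        sudo /etc/init.d/mysql restart
        ls -al /var/run/mysqld/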


  • How a batch file runs on a remote machine started by PSEXEC

    - by user38780
    I am having an issue running a batch file on a remote machine using PSEXEC. The file runs, but not the way it does when run through Remote Desktop. The batch runs a file which is a 32-bit application that opens multiple 16-bit applications; these should all run under one ntvdm.exe (in one memory space). Through Remote Desktop the batch file runs under the explorer process and works correctly, opening only one ntvdm.exe. Using PSEXEC the batch runs, but not under the explorer process, and a separate ntvdm.exe is opened for each process. I found that running the batch from explorer via PSEXEC works, but it comes up with a "File Download - Security Warning", e.g.:

        psexec.exe \\compname -u username -p password -s -d -i 0 explorer C:\Program.bat

    I want to be able to run the batch successfully without receiving warnings; it is a local warning, not a network share warning. (It's possible to recreate the warning by typing "explorer C:\windows\system32\cmd.exe" in Run.) I would like to know if anyone knows a way to get PSEXEC to open the batch file so it runs as though it was started by explorer, or a way of removing the local "File Download - Security Warning". Thanks
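    One avenue worth trying (a sketch, not a confirmed fix): the "File Download - Security Warning" prompt comes from the Windows Attachment Manager, which can be told to treat specific extensions as low risk. The policy has to land in the profile of the account the batch actually runs under (with -s that is SYSTEM, which uses its own hive, so test without -s first):

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Associations" ^
            /v LowRiskFileTypes /t REG_SZ /d ".bat;.cmd;.exe" /f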


  • Launching Installer Via Powershell and WinRM and Nothing Happens

    - by Nick DeMayo
    I'm currently working on a PowerShell script to run some Microsoft hotfix installers remotely on several Windows Server 2008 R2 servers that I manage. Basically, the script copies all the appropriate files up to the server, and then runs the installer via Invoke-Command, like so:

        function InstallCU {
            Write-Host "Installing June 2013 CU..."
            Invoke-Command -ComputerName $ServerName -ScriptBlock {
                Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" -ArgumentList "/passive"
            }
        }

    If I run the Start-Process command locally on the server, the installer runs properly. However, when trying to run it remotely, nothing happens (actually, I can see the installer start up in Task Manager, but it closes a couple of seconds later and doesn't run). I've attempted giving the Invoke-Command -Credential, I've turned off UAC on the server, and I've ensured that my WinRM settings (running 'winrm quickconfig' and setting TrustedHosts to *) are correct. I've also tried having the Invoke-Command script run a local PowerShell script to run the installer, and changing the argument from '/passive' to '/quiet' (in case it can't remotely launch something that has a UI), but again, no dice. Is there anything else I can try, or am I just not going to be able to do this?
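    One hedged guess: Invoke-Command tears the remote session down as soon as the script block returns, taking any child process it spawned with it. Making Start-Process block until the installer exits keeps the session alive for the duration. A sketch of the same function with that change (not a verified fix):

        function InstallCU {
            Write-Host "Installing June 2013 CU..."
            Invoke-Command -ComputerName $ServerName -ScriptBlock {
                # -Wait keeps the remote session open; -PassThru returns the process object
                $p = Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" `
                    -ArgumentList "/passive" -Wait -PassThru
                Write-Host "Installer exited with code $($p.ExitCode)"
            }
        }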


  • How do I give a user permission to view scheduled task history on Server 2008?

    - by pplrppl
    I've set up a scheduled task on Server 2008 and want to run it as a user other than the local administrator. So I chose a domain account created specifically for this task, and once I'd closed the scheduled task and entered a valid password, I wanted to run it and look at the History tab for this task. On the History tab I see: "The user account does not have permission to view task history on this computer." What permission must I grant to allow this user to view history, and/or how can I view the history as a local admin/domain admin instead of the user the job will run under?

    Steps to hopefully reproduce: I'm starting from Server Manager - Configuration - Task Scheduler - Task Scheduler Library. In the top middle pane I have tasks that have been running for several months as the local administrator. In the process of troubleshooting another issue I changed the task to run as Domain\ABCuser. Later in the process of troubleshooting I tried unchecking "Run with highest privileges". I have since changed the job back to SERVERNAME\Administrator, but the History tab still showed the permissions message. I may have had multiple Server Manager windows open; after closing Server Manager and making sure no other management consoles were open, I was able to reopen Server Manager and see the History tab without error. At this point the task works properly, but should I ever need to run a task as a task-specific account, I'd like to know how to make the history viewable. It may be something as simple as closing all Server Manager windows to allow cached permissions to be refreshed the next time you open the Manager, but at this point I don't know exactly what the solution is.
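    As a workaround while logged on with an account that can read the event logs: the History tab is just a view over the Task Scheduler Operational event channel, so it can be queried directly from an elevated prompt, sidestepping whatever the MMC has cached:

        wevtutil qe Microsoft-Windows-TaskScheduler/Operational /c:20 /rd:true /f:text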


  • /etc/crontab or any user crontab is not being executed

    - by ian
    My server is CentOS 5. When I edit /etc/crontab, or edit any user's crontab (including root's) via the "crontab -e" command, it just adds "(system) RELOAD (/etc/crontab)" or "(admin) RELOAD (cron/admin)" to the log. No CMD lines appear in /var/log/cron. Sample entries in /var/log/cron:

        Aug 10 10:21:33 localhost crontab[31688]: (root) BEGIN EDIT (root)
        Aug 10 10:21:42 localhost crontab[31688]: (root) REPLACE (root)
        Aug 10 10:21:42 localhost crontab[31688]: (root) END EDIT (root)
        Aug 10 10:22:01 localhost crond[2688]: (root) RELOAD (cron/root)

    Result of "service crond status":

        crond (pid 1345) is running...

    The command "cat /var/log/messages | grep cron" does not give anything. Contents of /etc/cron.allow:

        admin
        root

    Contents of /etc/crontab:

        SHELL=/bin/bash
        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        MAILTO=root
        HOME=/

        # run-parts
        01 * * * * root run-parts /etc/cron.hourly
        02 4 * * * root run-parts /etc/cron.daily
        22 4 * * 0 root run-parts /etc/cron.weekly
        42 4 1 * * root run-parts /etc/cron.monthly
        * * * * * root run-parts /bin/date >> /data/date.txt

    Result of ps aux | grep cron:

        root      1345  0.0  0.1   5268  1204 ?  Ss  11:43  0:00 crond

    Contents of admin's crontab:

        * * * * * /bin/date >> /data/date.txt

    Note that it's not only admin's crontab that's not running; all cron jobs are not running. Any ideas why they aren't running?
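    One detail worth noting in passing (an observation, not necessarily the explanation for every job failing): run-parts takes a directory of scripts, not a command, so the last line of /etc/crontab as written cannot work. A single command is scheduled directly, without run-parts:

        # corrected form of that line: no run-parts for a single command
        * * * * * root /bin/date >> /data/date.txt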


  • ephemeral vs EBS partitions

    - by hortitude
    I launched an EBS-backed AMI with all the defaults. I noticed that it automatically had an ephemeral disk attached. I was just wondering if there is a good programmatic way to know that this particular device is ephemeral, versus some EBS volume I had decided to attach:

        ubuntu@-----:~$ df -ahT
        Filesystem  Type        Size  Used Avail Use% Mounted on
        /dev/xvda1  ext4        7.9G  867M  6.7G  12% /
        proc        proc           0     0     0    - /proc
        sysfs       sysfs          0     0     0    - /sys
        none        fusectl        0     0     0    - /sys/fs/fuse/connections
        none        debugfs        0     0     0    - /sys/kernel/debug
        none        securityfs     0     0     0    - /sys/kernel/security
        udev        devtmpfs    1.9G   12K  1.9G   1% /dev
        devpts      devpts         0     0     0    - /dev/pts
        tmpfs       tmpfs       751M  172K  750M   1% /run
        none        tmpfs       5.0M     0  5.0M   0% /run/lock
        none        tmpfs       1.9G     0  1.9G   0% /run/shm
        /dev/xvdb   ext3        394G  199M  374G   1% /mnt

        ubuntu@-----:~$ mount
        /dev/xvda1 on / type ext4 (rw)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        /dev/xvdb on /mnt type ext3 (rw,_netdev)
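    The instance metadata service exposes the block device mapping, which distinguishes ephemeral devices from attached EBS volumes, and it works from inside any EC2 instance:

        # list mapping names (e.g. ami, root, ephemeral0, ebs1, ...)
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/

        # show which device an entry maps to
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0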


  • Starting nginx with systemctl fails, but running the command manually doesn't

    - by Ivan
    On Arch Linux, for some reason, when I try to start nginx with the command "systemctl start nginx", it fails, with this being the output of "systemctl status nginx":

        Loaded: loaded (/etc/systemd/system/nginx.service; enabled)
        Active: failed (Result: exit-code) since Wed 2013-10-30 16:22:17 EDT; 5s ago
        Process: 9835 ExecStop=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g pid /run/nginx.pid; -s quit (code=exited, status=126)
        Process: 3982 ExecStart=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=0/SUCCESS)
        Process: 10967 ExecStartPre=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=126)
        Main PID: 3984 (code=exited, status=0/SUCCESS)
        CGroup: /system.slice/nginx.service

    ...but when I run

        /usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g "pid /run/nginx.pid; daemon on; master_process on;"

    and then

        /usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g "pid /run/nginx.pid; daemon on; master_process on;"

    as root, all it does is return a warning, but works just fine:

        nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1

    Why is it doing that?
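    A hedged observation: in the status output the -g argument appears unquoted, and exit status 126 usually means "found but not executable", which would fit systemd splitting the command line differently than the shell does. If the unit file lacks quoting around the -g value, single quotes (which systemd's ExecStart parsing supports) would be a sketch of a fix:

        ExecStartPre=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g 'pid /run/nginx.pid; daemon on; master_process on;'
        ExecStart=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g 'pid /run/nginx.pid; daemon on; master_process on;'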


  • Upstart can't determine my process' pid

    - by sirlark
    I'm writing an upstart script for a small service I've written for my colleagues. My upstart job can start the service, but when it does it only outputs "queryqueue start/running"; note the lack of a pid as given for other services.

        #/etc/init/queryqueue.conf
        description "Query Queue Daemon"
        author "---"

        start on started mysql
        stop on stopping mysql

        expect fork

        env DEFAULTFILE=/etc/default/queryqueue
        umask 007
        kill timeout 30

        pre-start script
            #if [ -f "$DEFAULTFILE" ]; then
            #    . "$DEFAULTFILE"
            #fi
            [ ! -f /var/run/queryqueue.sock ] || rm -rf /var/run/queryqueue.sock
            #exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script

        script
            #Originally this stanza didn't exist at all
            if [ -f "$DEFAULTFILE" ]; then
                . "$DEFAULTFILE"
            fi
            exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script

        post-start script
            for i in `seq 1 5` ; do
                [ -S /var/run/queryqueue.sock ] && exit 0
                sleep 1
            done
            exit 1
        end script

    The service in question is a Python script which, when run without error, forks using the code below right after checking command line options and basic environmental sanity, so I tell upstart to expect fork:

        pid = os.fork()
        if pid != 0:
            sys.exit(0)

    The script is executable and has a Python shebang. I can send the TERM signal to the process manually and it quits gracefully. But running "stop queryqueue" claims "queryqueue stop/waiting", yet the process is still alive and well. Also, its logs indicate it never received the kill signal. I'm guessing this is because upstart doesn't know which pid it has. I've also tried "expect daemon" and leaving the expect clause out entirely, but there's no change in behaviour. How can I get upstart to determine the pid of the exec'd process?
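    For reference, a minimal sketch of the classic double fork that "expect daemon" counts on (upstart follows two forks for "expect daemon" and exactly one for "expect fork"); whether it resolves this particular job is untested:

        import os

        def daemonize():
            # first fork: the parent that upstart launched exits
            if os.fork() != 0:
                os._exit(0)
            os.setsid()            # detach from the controlling terminal
            # second fork: the session leader exits, leaving the daemon
            if os.fork() != 0:
                os._exit(0)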


  • Understanding how IE's SmartScreen works

    - by Kevin Donn
    Today I downloaded an update to our mail server on my dev machine using IE9 on Win7 Pro. I directed IE to save the file on our server's shared drive so I could install it later. When the download finished, IE showed a red banner at the bottom and said that ".exe is not commonly downloaded and could harm your computer." There were three buttons: "Delete", "Actions", and "View downloads". I selected "Actions" just because I had never seen this before. It showed a "SmartScreen Filter" dialog basically giving three choices: "Don't run this program (recommended)", "Delete program", and "Run anyway". I just canceled the dialog because I didn't want to run it in the first place; I just wanted to download it so I could run it later on the server.

    So when I did try to run it, it would blow up immediately saying, "Setup was unable to create the directory - Error 5: Access is denied." I tried unblocking the file, "Run as Administrator" even though I already was Administrator, turning off UAC, etc. Cutting to the chase, I finally downloaded the file again, ran WinMerge on the two, and it showed they were identical, except the new one ran fine. I went back to my dev machine, downloaded the file through Firefox, and then ran it on the server, again fine.

    But when I tried again through IE, again SmartScreen showed its red banner and somehow clobbered the file, even though it was stored on another machine, and WinMerge can't tell the difference between it and a good file. I've looked around on the web for how SmartScreen works, but the descriptions are all user-level. What I want to know is: what does it do to that file to make it unrunnable on another machine? Thanks
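    Background that may help frame the question: files downloaded from the Internet zone get an NTFS alternate data stream named Zone.Identifier attached, and that marker is what triggers blocking when the file is opened elsewhere; it would also be invisible to WinMerge, which compares only the main data stream. A sketch for inspecting and removing it (hypothetical file name; streams.exe is Sysinternals, Unblock-File needs PowerShell 3.0+):

        rem list alternate data streams (Vista/Win7 and later)
        dir /r ubersrv.exe

        rem show the zone marker itself
        more < ubersrv.exe:Zone.Identifier

        rem remove it with Sysinternals streams.exe
        streams.exe -d ubersrv.exe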


  • Processes spawned by taskset not respecting environment variables

    - by jonesy16
    I've run into an issue where an Intel-compiler-generated program that I'm running with taskset has been putting its temporary files into the working directory instead of /tmp (defined by the environment variable TMPDIR). Run by itself, it works correctly. If run with taskset, e.g.

        taskset -c 0 <program>

    then it seems to completely ignore the TMPDIR environment variable. I verified this by writing a quick bash script. Contents of test.sh:

        #!/bin/bash
        echo $TMPDIR

    When run by itself:

        $ export TMPDIR=/tmp
        $ test.sh
        /tmp

    When run through taskset:

        $ export TMPDIR=/tmp
        $ taskset -c 1 test.sh

    which prints an empty string. Another test: if I export the TMPDIR variable inside my script and then use taskset to spawn a new process, it doesn't know about that variable:

        #!/bin/bash
        export TMPDIR=/tmp
        taskset -c 1 sh -c export

    When run, the list of exported variables does not include TMPDIR. It works correctly with any other exported environment variable. If I diff the output of

        export

    and

        taskset -c 1 bash -c export

    then I see that there are 4 changes: the taskset-spawned export doesn't have LD_LIBRARY_PATH or NLSPATH (an Intel compiler variable), SHLVL is 3 instead of 1, and TMPDIR is missing. Can anyone tell me why?
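    A diagnostic sketch that may narrow down where the variables disappear, by comparing whole environments rather than individual variables:

        # dump both environments and compare them side by side
        env | sort > /tmp/env.plain
        taskset -c 1 env | sort > /tmp/env.taskset
        diff /tmp/env.plain /tmp/env.taskset

    Since env bypasses any shell startup files, a difference here would point at taskset itself; identical output would instead point at shell re-initialization (e.g. /etc/profile or BASH_ENV) in the spawned sh/bash.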


  • Ubuntu "No space left on device" for /home, df shows 100% full, ds shows much, much less

    - by Jon Cram
    On an Ubuntu 12.04 server, normal users can no longer create or add to files in /home, encountering a "No space left on device" error. The /home directory has a capacity of 1.7 terabytes and, as far as I can tell, is nowhere near full in terms of actual data stored or inodes used. df -h shows:

        Filesystem  Size  Used Avail Use% Mounted on
        /dev/md2    1.0T   18G  955G   2% /
        udev        7.7G  4.0K  7.7G   1% /dev
        tmpfs       3.1G  320K  3.1G   1% /run
        none        5.0M     0  5.0M   0% /run/lock
        none        7.7G     0  7.7G   0% /run/shm
        cgroup      7.7G     0  7.7G   0% /sys/fs/cgroup
        /dev/md3    1.7T  1.7T     0 100% /home
        /dev/md1    496M   45M  426M  10% /boot

    /home indeed looks rather full. du -hs /home suggests otherwise:

        1.4G    /home

    There appears to be no inode issue; df -i:

        Filesystem    Inodes  IUsed     IFree IUse% Mounted on
        /dev/md2    67108864  75334  67033530    1% /
        udev         2013497    527   2012970    1% /dev
        tmpfs        2015816    440   2015376    1% /run
        none         2015816      2   2015814    1% /run/lock
        none         2015816      1   2015815    1% /run/shm
        cgroup       2015816      9   2015807    1% /sys/fs/cgroup
        /dev/md3   113909760 105981 113803779    1% /home
        /dev/md1      131072    239    130833    1% /boot

    I recently deleted many gigabytes of application cache and log data from /home; however, this was in the tens of gigabytes at best, nowhere near the capacity of /home.

    Update 1:

        du -hs --apparent-size /home
        1.2G    /home
        du -hs /home
        1.4G    /home

    What might be going on here?
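    One classic cause of df and du disagreeing like this is files that were deleted while a process still holds them open: df counts their blocks, but du can no longer see them. A quick check (a sketch; lsof must be installed):

        # open files with link count 0, i.e. deleted but still held open
        sudo lsof +L1 | grep /home

    If large deleted files show up, restarting (or HUPing) the owning process releases the space.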


  • logrotate deletes all maillogs older than one day

    - by shadyabhi
    I see only two files, maillog and maillog.1, in /var/log. Grepping for maillog in the logrotate.d directory gives three files that mention maillog.

    syslog:

        /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron {
        #/var/log/messages /var/log/secure /var/log/spooler /var/log/boot.log /var/log/cron {
            daily
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    syslog-ng:

        /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/kern.log /var/log/kern {
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    and maillog:

        /var/log/maillog {
            daily
            compress
            # rotate 365
            rotate 14
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    I am new to logrotate, so maybe I am missing something obvious. What could the issue be? The setup was already done when I started managing the server, so I also don't know why there are three mentions of maillog in the logrotate configuration.
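    One plausible lead worth checking (a guess, not a confirmed diagnosis): when the same log file appears in more than one stanza, logrotate reports a duplicate entry and honours only one of them, and a stanza with no explicit "rotate" count falls back to the default of 0, which removes old logs instead of keeping them. A dry run shows which stanza actually wins for maillog without touching anything:

        # -d: debug/dry-run, prints logrotate's decisions for every log
        sudo logrotate -d /etc/logrotate.conf 2>&1 | grep -B2 -A8 maillog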


  • Weekly cron executing 2 times:

    - by yes123
    Hi guys, I have placed a .sh file that runs a PHP script weekly. This script should run only once, but every Sunday it runs at:

        1:30 am
        6:50 am

    Any way to fix this?

    Add 1: /etc/cron.weekly/cronweek:

        #!/bin/bash
        /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Add 2: crontab file:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #
        */1 * * * * root /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
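    A hedged reading of that crontab: the 6:50 run lines up with the "47 6 * * 7" line (which only fires when anacron is absent), while the 1:30 run must come from some other scheduler, e.g. anacron itself, another crontab, or an /etc/cron.d entry. Some checks that should narrow it down:

        # is anacron installed, and when does it schedule cron.weekly?
        ls -l /usr/sbin/anacron
        cat /etc/anacrontab 2>/dev/null

        # any other place the script or cron.weekly is referenced?
        grep -r cronweek /etc/cron* /var/spool/cron 2>/dev/null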


  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms, or refer me to a good guide on, how to achieve the below. I have a system running on EC2 and Amazon RDS, getting data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run, and it needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set up to execute the script on startup and shut down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day: through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup: through cloud-init? Any help would be appreciated. One last thing: the job is not very time-sensitive, as long as it runs every day. Thanks
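    A rough sketch of the cron-driven approach (all paths, prices and file names here are hypothetical, and it assumes the AWS CLI is installed and configured on the always-on server): the existing server requests a spot instance each evening, and the user-data script baked into the launch specification runs the job and shuts the box down, which for a spot instance should terminate it.

        # crontab entry on the existing EC2 server (23:00 daily)
        0 23 * * * aws ec2 request-spot-instances --spot-price "0.35" --instance-count 1 --launch-specification file:///home/admin/spot-spec.json

        # user-data script referenced by spot-spec.json
        #!/bin/bash
        /opt/reports/run_daily_report.sh >> /var/log/daily_report.log 2>&1
        shutdown -h now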


  • Windows 7 permissions and Samba domain

    - by Nimzo
    I'm the main desktop support in an office of about 60 mixed Mac OS and Windows XP machines. I'm new to Windows 7. Currently our XP users are admins on their own machines, and my boss wants us to get away from that now that we're going to Windows 7 (64-bit). My boss is largely absent from my day-to-day work, so I'm here looking for help. =)

    I have numerous unattended .cmd scripts that we run from a server share for unattended software installs. Some run at login; some have to be run manually. With the NetworkAdmin account logged on to the computer, I can run the .cmd files and install things just fine. With my test account logged on (Power User), I cannot run the .cmd file; I get an Access Denied. When I change my test account to an admin on the machine, I still get Access Denied. However, the test account can simply double-click the .exe and install the software just fine as admin, while the Power User can't install anything. How do I fix it so that any Power User or admin on the machine can run anything, as long as it's on our shared software drive?
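    A first diagnostic step, as a sketch with a hypothetical share path: check what the ACLs on the share and on the script actually grant, since running a .cmd from a share can be blocked by NTFS/share permissions independently of local group membership:

        icacls \\server\software
        icacls \\server\software\install.cmd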


  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
    In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components Application Modules, as shown here for one of my sample deployed applications. You can access this screen from the EM homepage by pulling down the Application Deployment menu, and then ADF > Configure ADF Business Components. Then select the profile that you are actually using (hint: look in the DataBindings.cpx file to work this out; it's probably the "Local" version unless you've explicitly changed it). From this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and update) the MBean properties.

    Explanation

    The pooling parameters and many others are handled through MBeans that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean will combine the configuration deployed as part of the application with any overrides defined as -D environment commands on the JVM startup for the application server instance.

    Using WLST to Browse the Bean Values

    For our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.

    Step 0: Before You Start

    You will need the following:
    - Access to the console on the machine that is running the server
    - The WebLogic admin username and password (I'll use weblogic/password as my example here; yours will be different)
    - The name of the deployed application (in this example FMWdh_application1)
    - The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg); this is based on the default path for your model project, so it should be fairly easy to work out
    - The BC configuration your AM is actually running with (look in the DataBindings.cpx for that; in this example DealHelpServiceDeployed is the profile being used)

    Step 1: Start the WLST console

    To start at the beginning, you need to run the WLST command, but that needs a little setup. Change to the wlserver_10.3/server/bin directory, e.g. under your Fusion Middleware home:

        [oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/bin

    Set your environment using the setWLSEnv script, e.g. on Oracle Enterprise Linux:

        [oracle@mymachine bin] source setWLSEnv.sh

    Start the WLST interactive console:

        [oracle@mymachine bin] java weblogic.WLST
        Initializing WebLogic Scripting Tool (WLST) ...
        Welcome to WebLogic Server Administration Scripting Shell
        Type help() for help on available commands
        wls:/offline>

    Step 2: Enter the WLST commands

    Connect to the server:

        wls:> connect('weblogic','password')

    Change to the Custom root; this is where the AM pooling MBeans are registered:

        wls:> custom()

    Change to the bc4j MBean directory:

        wls:> cd ('oracle.bc4j.mbean.config')

    Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see one for the config entry itself, and then one each for Security, DB Connection and AM Pooling. So if you deploy an app with just one configuration you'll see 5 directories; if it has two configurations in the .xcfg you'll see 9, and so on.

    The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the application name, the configuration you are using, and the xcfg name:
    - First of all, narrow your list to just those directories returned from the ls() command that begin oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications.
    - Now look for the correct application name, e.g. Application=FMWdh_application1.
    - The config setting in that sub-list should already be correct and match what you expect, e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg
    - Finally look for the correct value for the AppModuleConfigType, e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed

    Now that you have identified the correct directory name, change to it (keep the name on one line of course; it's split across lines here for clarity):

        wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,
            type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,
            oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,
            Application=FMWdh_application1,
            oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')

    Now you can actually view the parameter values with a simple ls() command:

        wls:> ls()

    And here's the output, in which you can view the realtime values of the various pool settings:

        -rw- AmpoolConnectionstrategyclass    oracle.jbo.common.ampool.DefaultConnectionStrategy
        -rw- AmpoolDoampooling                true
        -rw- AmpoolDynamicjdbccredentials     false
        -rw- AmpoolInitpoolsize               2
        -rw- AmpoolIsuseexclusive             true
        -rw- AmpoolMaxavailablesize           40
        -rw- AmpoolMaxinactiveage             600000
        -rw- AmpoolMaxpoolsize                4096
        -rw- AmpoolMinavailablesize           2
        -rw- AmpoolMonitorsleepinterval       600000
        -rw- AmpoolResetnontransactionalstate true
        -rw- AmpoolSessioncookiefactoryclass  oracle.jbo.common.ampool.DefaultSessionCookieFactory
        -rw- AmpoolTimetolive                 3600000
        -rw- AmpoolWritecookietoclient        false
        -r-- ConfigMBean                      true
        -rw- ConnectionPoolManager            oracle.jbo.server.ConnectionPoolManagerImpl
        -rw- Doconnectionpooling              false
        -rw- Dofailover                       false
        -rw- Initpoolsize                     0
        -rw- Maxpoolcookieage                 -1
        -rw- Maxpoolsize                      4096
        -rw- Poolmaxavailablesize             25
        -rw- Poolmaxinactiveage               600000
        -rw- Poolminavailablesize             5
        -rw- Poolmonitorsleepinterval         600000
        -rw- Poolrequesttimeout               30000
        -rw- Pooltimetolive                   -1
        -r-- ReadOnly                         false
        -rw- Recyclethreshold                 10
        -r-- RestartNeeded                    false
        -r-- SystemMBean                      false
        -r-- eventProvider                    true
        -r-- eventTypes                       java.lang.String[jmx.attribute.change]
        -r-- objectName                       oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
        -rw- poolClassName                    oracle.jbo.common.ampool.ApplicationPoolImpl

    Thanks to Brian Fry on the JDeveloper PM team, who did most of the work to put this sequence of steps together with me badgering him over his shoulder.


  • how do you document your development process?

    - by David
    My current state is a mixture of spreadsheets, wikis, documents, and dated folders for my input/configuration and output files, with bzr version control for code. I am relatively new to programming that requires this level of documentation, and I would like to find a better, more coherent approach.

    Update (for clarity): My inputs are data used to generate configuration files with parameter values, and my outputs are analyses of model predictions. I would really like an approach that allows me to associate particular configurations with particular outputs, so that I can ask questions of my documentation such as "what causes over/under-estimates?" or "what causes error 'X'?"


  • Often "running in low-graphics mode” with NVidia GeForce GT 430

    - by user42228
    Often (every fifth boot, perhaps) when I turn on my system I'm greeted with the "running in low-graphics mode" window. Sometimes all I have to do is reboot and the system starts correctly; sometimes there's a longer series of failed starts. If I go to the terminal and log in, I can run startx and my desktop appears without problems. At the end of /var/log/Xorg.1.log I see this:

        [    77.459] (EE) Screen(s) found, but none have a usable configuration.
        [    77.459] Fatal server error:
        [    77.459] no screens found
        [    77.459] Please consult the The X.Org Foundation support at http://wiki.x.org

    Is the NVIDIA driver creating the appropriate X.Org configuration dynamically on every boot, or what else could explain that the configuration error above only occurs sometimes?
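    If the proprietary NVIDIA driver is installed, it ships a tool that writes a static /etc/X11/xorg.conf pinned to the nvidia driver, which takes dynamic detection out of the picture on each boot; whether that cures an intermittent failure like this one is a guess:

        # writes /etc/X11/xorg.conf (back up any existing file first)
        sudo nvidia-xconfig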


  • ORA-4030 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 399497.1 FAQ ORA-4030
    QUICKLINK: Note 1088087.1 ORA-4030 Diagnostic Tools [Video]

    Have you observed an ORA-4030 error reported in your alert log? ORA-4030 errors are raised when memory or resources are requested from the operating system and the operating system is unable to provide them. The arguments included with the ORA-4030 are often important for narrowing down the problem. For more specifics on the ORA-4030 error and the scenarios that lead to it, see Note 399497.1 FAQ ORA-4030.

    Looking for the best way to diagnose? There are several available diagnostic tools (error tracing, 11g Diagnosability, OCM, process memory guides, RDA, OSW, diagnostic scripts) that collectively can prove powerful for identifying the cause of the ORA-4030.

    Error Tracing

    The ORA-4030 error usually occurs on the client workstation, and for this reason a trace file and alert log entry may not have been generated on the server side. It may be necessary to add additional tracing events to get initial diagnostics on the problem. To set up tracing to trap the ORA-4030, on the server use the following in SQL*Plus:

        alter system set events '4030 trace name heapdump level 536870917;name errorstack level 3';

    Once the error reoccurs with the event set, you can turn off tracing using the following command in SQL*Plus:

        alter system set events '4030 trace name context off; name context off';

    NOTE: See more diagnostics information to collect in Note 399497.1.

    11g Diagnosability

    Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4030, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file may contain vital information about what led to the error condition. See Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video].

    Oracle Configuration Manager (OCM)

    Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations.
    - Oracle Configuration Manager Quick Start Guide
    - Note 548815.1: My Oracle Support Configuration Management FAQ
    - Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager

    General Process Memory Guides

    An ORA-4030 indicates a limit has been reached with respect to the Oracle process private memory allocation. Each operating system will handle memory allocations with Oracle slightly differently:
    - Solaris: Note 163763.1
    - Linux: Note 341782.1
    - IBM AIX: Notes 166491.1 and 123754.1
    - HP: Note 166490.1
    - Windows: Note 225349.1, Note 373602.1, Note 231159.1, Note 269495.1, Note 762031.1
    - Generic: Note 169706.1

    RDA

    The RDA report will show more detailed information about the database and server configuration.
    - Note 414966.1 RDA Documentation Index
    - Download RDA: refer to Note 314422.1 Remote Diagnostic Agent (RDA) 4 - Getting Started

    OS Watcher (OSW)

    This tool is designed to gather operating-system-side statistics to compare with the findings from the database. This is a key tool in cases where memory usage is higher than expected on the server while ORA-4030 errors are not currently occurring. Reference more details on setup and usage in Note 301137.1 OS Watcher User Guide.

    Diagnostic Scripts

    Refer to Note 1088087.1: ORA-4030 Diagnostic Tools [Video].

    Common Causes/Solutions

    The ORA-4030 can occur for a variety of reasons. Some common causes are:
    - OS memory limit reached, such as physical memory and/or swap/virtual paging. For instance, IBM AIX can experience ORA-4030 issues related to swap scenarios; see Note 740603.1 "10.2.0.4 not using large pages on AIX" for more on that problem. Also reference Note 188149.1 for pointers on 10g and stack size issues.
    - OS limits reached (kernel or user shell limits) that limit overall, user-level or process-level memory.
    - OS limit on PGA memory size due to the SGA attach address. Reference: Note 1028623.6 SOLARIS How to Relocate the SGA.
    - Oracle internal limit on functionality like PL/SQL varrays or bulk collections. ORA-4030 errors will include arguments like "pl/sql vc2", "pmucalm coll", "pmuccst: adt/re". See Coding Pointers for pointers on application design to get around these issues.
    - Application design causing limits to be reached.
    - Bugs: space leaks, heap leaks.

    For reference to the content in this blog, refer to Note 1088267.1 Master Note for Diagnosing ORA-4030.


  • Monitoring your WCF Web Apis with AppFabric

    - by cibrax
    The other day, Ron Jacobs made public a template in the Visual Studio Gallery for enabling monitoring capabilities for any existing WCF HTTP service hosted in Windows AppFabric. I thought it would be a cool idea to reuse some of that for doing the same thing on the new WCF Web HTTP stack. Windows AppFabric provides a dashboard that you can use to dig into some metrics about service usage, such as the number of calls, errors, or information about different events during a service call. Those events not only include information about the WCF pipeline, but also custom events that any developer can inject and that make sense for troubleshooting issues.

    These monitoring capabilities can be enabled on any specific IIS virtual directory by using the AppFabric configuration tool or by adding the following configuration sections to your existing web app:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
          <diagnostics etwProviderId="3e99c707-3503-4f33-a62d-2289dfa40d41">
            <endToEndTracing propagateActivity="true" messageFlowTracing="true" />
          </diagnostics>
          <behaviors>
            <serviceBehaviors>
              <behavior name="">
                <etwTracking profileName="EndToEndMonitoring Tracking Profile" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

        <microsoft.applicationServer>
          <monitoring>
            <default enabled="true" connectionStringName="ApplicationServerMonitoringConnectionString" monitoringLevel="EndToEndMonitoring" />
          </monitoring>
        </microsoft.applicationServer>

    The bad news is that none of the configuration above can easily be set in code using the new configuration model for the WCF Web stack. A good thing is that you can easily disable it in the configuration when you no longer need it, and it uses ETW, a general-purpose, high-speed tracing facility provided by the operating system (it's part of the Windows kernel). By adding that configuration section, AppFabric will start monitoring your service automatically and provide some basic event information about the service calls. You need some custom code to inject custom events into the monitoring data.

    What I did here is to copy and refactor the "WCFUserEventProvider" class provided as a sample in Ron's template to make it more TDD-friendly when using IoC. I created a simple interface "ILogger" that any service (or resource) can use to inject custom events or monitoring information into the AppFabric database:

        public interface ILogger
        {
            bool WriteError(string name, string format, params object[] args);
            bool WriteWarning(string name, string format, params object[] args);
            bool WriteInformation(string name, string format, params object[] args);
        }

    The "WCFUserEventProvider" class implements this interface, making it possible to send the events to the AppFabric monitoring database. The service or resource implementation can receive an "ILogger" as part of the constructor:

        [ServiceContract]
        [Export]
        public class OrderResource
        {
            IOrderRepository repository;
            ILogger logger;

            [ImportingConstructor]
            public OrderResource(IOrderRepository repository, ILogger logger)
            {
                this.repository = repository;
                this.logger = logger;
            }

            [WebGet(UriTemplate = "{id}")]
            public Order Get(string id, HttpResponseMessage response)
            {
                var order = this.repository.All.FirstOrDefault(o => o.OrderId == int.Parse(id, CultureInfo.InvariantCulture));
                if (order == null)
                {
                    response.StatusCode = HttpStatusCode.NotFound;
                    response.Content = new StringContent("Order not found");
                }

                this.logger.WriteInformation("Order Requested", "Order Id {0}", id);

                return order;
            }
        }

    The example above uses MEF as the IoC container for injecting a repository and the logger implementation into the service. You can also see how the logger is used to write an information event into the monitoring database. The following image illustrates how the custom event is injected and the information becomes available to any user in the dashboard. An issue that you might run into, and that I hope the WCF and AppFabric teams fix soon, is that any WCF service that uses friendly URLs with ASP.NET routing does not get listed as an available service in the WCF services tab in the AppFabric console. The complete example is available to download from here.


  • SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

    - by Brian
    Oracle's SPARC T4-4 servers running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record 18,000 concurrent users while executing a PeopleSoft Payroll batch job of 500,000 employees in 43.32 minutes and maintaining online user response times at < 2 seconds. This world record is the first result to run online and batch workloads concurrently. The result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier.

    The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 35% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices. This is the first three-tier mixed-workload (online and batch) PeopleSoft benchmark also processing the PeopleSoft payroll batch workload.

    Performance Landscape

    PeopleSoft HR Self-Service and Payroll Benchmark:

        Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
        Users: 18,000
        Avg response, search (sec): 0.944
        Avg response, save (sec): 0.503
        Batch time (min): 43.32
        Streams: 64

    Configuration Summary

    Application configuration:
    - 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
    - 512 GB memory
    - 5 x 300 GB SAS internal disks
    - 1 x 100 GB and 2 x 300 GB internal SSDs
    - 2 x 10 GbE HBA
    - Oracle Solaris 11 11/11
    - PeopleTools 8.52
    - PeopleSoft HCM 9.1
    - Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
    - Java Platform, Standard Edition Development Kit 6 Update 32

    Database configuration:
    - 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
    - 256 GB memory
    - 3 x 300 GB SAS internal disks
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2

    Web tier configuration:
    - 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz
    - 256 GB memory
    - 2 x 300 GB SAS internal disks
    - 1 x 100 GB internal SSD
    - Oracle Solaris 11 11/11
    - PeopleTools 8.52
    - Oracle WebLogic Server 10.3.4
    - Java Platform, Standard Edition Development Kit 6 Update 32

    Storage configuration:
    - 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory)
    - 1 x Sun Storage F5100 Flash Array (80 flash modules)
    - 1 x Sun Storage F5100 Flash Array (40 flash modules)
    - 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller)

    Benchmark Description

    This benchmark combines PeopleSoft HCM 9.1 HR Self-Service online and PeopleSoft Payroll batch workloads running on a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It's an OLTP benchmark where DB SQLs are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers and HR administrators.

    The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, updating an employee profile, and other typical HCM application transactions. All these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. This benchmark metric is the weighted average response search/save time for all the transactions.

    The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of an ERP environment during a mass update. The benchmark measures five application business process run times for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices

    Two Oracle PeopleSoft domain sets with 200 application servers each on a SPARC T4-4 server were hosted in two separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (one core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency (using the physical memory closest to the processors) and by offloading I/O interrupt handling to default-set threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads.

    See Also

    - Oracle PeopleSoft Benchmark White Papers (oracle.com)
    - SPARC T4-2 Server (oracle.com OTN)
    - SPARC T4-4 Server (oracle.com OTN)
    - PeopleSoft Enterprise Human Capital Management (oracle.com OTN)
    - PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com OTN)
    - Oracle Solaris (oracle.com OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)

    Disclosure Statement

    Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.


  • Access denied error when adding new server to existing SharePoint 2007 Farm

    - by Kelly Jones
    I recently got an error when adding a new SharePoint 2007 (SP2) server to our existing MOSS farm. The installation ran fine, but while walking through the SharePoint Configuration Wizard I got an error on step 2: "Resource retrieved id PostSetupConfigurationFailedEventLog is Configuration of SharePoint Products and Technologies failed." I searched the net but didn't really find anything. I then remembered that I had forgotten to run the latest SharePoint updates (for us, the latest applied was December 2009). I installed the WSS update and then the MOSS update, and both worked fine. I then ran the Configuration Wizard again and it worked without any errors.
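    For anyone scripting this rather than clicking through the wizard, the command-line equivalent after patching is psconfig from the SharePoint BIN directory (a standard step for build-to-build upgrades, though the exact flags for a given patch should be checked against its release notes):

        psconfig -cmd upgrade -inplace b2b -wait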


  • Live boot from hard disk problem

    - by user172277
    I've installed Ubuntu Desktop 13.04 32-bit. Next I configured /etc/grub.d/40_custom to boot a live system from ubuntu.iso (also Desktop 13.04 32-bit). I used this configuration:

        menuentry "Ubuntu 13.04 Desktop" {
            loopback loop /boot/ubuntu.iso
            linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=/boot/ubuntu.iso noeject noprompt splash --
            initrd (loop)/casper/initrd.lz
        }

    and it works OK. Later I made some changes on my installed Ubuntu: some configuration, additional packages, and so on. After that I made a backup using the remastersys tool. Remastersys gave me a new ISO file, so I wanted to use it. And here was the first problem: remastersys creates only an initrd.gz file, so I changed the grub configuration from

        initrd (loop)/casper/initrd.lz

    to

        initrd (loop)/casper/initrd.gz

    But after that, when I reboot my system, I get the error:

        /init: line 3: can't open /dev/sr0: No medium found

    Any ideas how to fix it? Best Regards, Bartosz


  • Is your Xcode4 stable?

    - by Eonil
    I have upgraded to Xcode 4, and I'm experiencing an unbelievable situation: Xcode 4 crashes every 5 minutes. It is incredibly slow, almost impossible to use. Maybe the problem is my hardware configuration. I'm using a 3rd-generation MacBook Air with 2 GB RAM and an SSD. It was just fine with Xcode 3, but now it consumes all available memory and crashes far too often. Is your Xcode 4 stable? If so, please let me know your hardware configuration. I want to know whether this problem is caused by the hardware configuration or not, to decide whether to buy a new Mac.

