Search Results


  • JBoss logging issue

    - by balaji
    I work as a deployer and server administrator. We use JBoss 4.0.x AS to deploy our applications. The issue I'm facing: whenever we redeploy or restart the server, server.log is created, but after some time logging stops and the file is no longer updated at all. Because of this, we cannot trace the other critical issues we have. We have two separate nodes and we deploy/restart the server separately on each node, and we see the issue in both our test and production environments. I could not trace out where exactly the issue is; could you please help me resolve it? When we have other issues we check the log files, but if the log itself is not being updated, how can we go further in analyzing them without recent logs? Below are the entries found in stdout.log:

      18:55:50,303 INFO [Server] Core system initialized
      18:55:52,296 INFO [WebService] Using RMI server codebase: http://kl121tez.is.klmcorp.net:8083/
      18:55:52,313 INFO [Log4jService$URLWatchTimerTask] Configuring from URL: resource:log4j.xml
      18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFTraceLogger without a class name.
      18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFMessageLogger without a class name.
      18:55:54,273 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheTraceLogger without a class name.
      18:55:54,274 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheMessageLogger without a class name.
      18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCTraceLogger without a class name.
      18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCMessageLogger without a class name.
      18:55:56,059 INFO [ServiceEndpointManager] WebServices: jbossws-1.0.3.SP1 (date=200609291417)
      18:55:56,635 INFO [Embedded] Catalina naming disabled
      18:55:56,671 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
      18:55:56,672 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
      18:55:56,843 INFO [Http11BaseProtocol] Initializing Coyote HTTP/1.1 on http-0.0.0.0-8180
      18:55:56,844 INFO [Catalina] Initialization processed in 172 ms
      18:55:56,844 INFO [StandardService] Starting service jboss.web
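
    A hedged editorial note: the repeated LOG0026E lines come from a second logging framework (an IBM/Tivoli-style Log Manager bundled with one of the deployed applications) initializing in the same JVM, and such frameworks can reconfigure or capture log4j's appenders, which would explain logging dying some time after startup rather than at startup. It is worth confirming that the conf/log4j.xml the Log4jService line above is loading still carries the stock FILE appender; a sketch of the JBoss 4.0.x default, not this installation's actual file:

      <appender name="FILE" class="org.jboss.logging.appender.DailyRollingFileAppender">
         <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
         <param name="File" value="${jboss.server.home.dir}/log/server.log"/>
         <param name="Append" value="false"/>
         <param name="DatePattern" value="'.'yyyy-MM-dd"/>
         <layout class="org.apache.log4j.PatternLayout">
            <!-- Default pattern: Date Priority [Category] Message\n -->
            <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
         </layout>
      </appender>

    Starting the JVM with -Dlog4j.debug=true also makes log4j print its own configuration activity to stdout, which should reveal whether something re-runs the configuration later in the server's life.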

  • ClassCastException on Custom View

    - by tuxGurl
    When I try to findViewById() on my custom view I keep getting a ClassCastException. I've tried so many things that I'm sure I've botched the code by now! To make sure I'm not going insane, I stripped the classes down to their bare minimum in order to find what was wrong. I'm new to Android programming and I'm sure I'm missing something basic. This is BaseImageView, an extended View class:

      package com.company.product.client.android.gui.views;

      import android.content.Context;
      import android.graphics.Canvas;
      import android.graphics.Color;
      import android.view.View;

      public class BaseImageView extends View {
          public BaseImageView(Context context) {
              super(context);
          }

          @Override
          protected void onDraw(Canvas canvas) {
              super.onDraw(canvas);
              canvas.drawColor(Color.GREEN);
          }
      }

    This is LiveImageView, an extension of the BaseImageView class:

      package com.company.product.client.android.gui.views;

      import android.content.Context;
      import android.util.AttributeSet;

      public class LiveImageView extends BaseImageView {
          public LiveImageView(Context context, AttributeSet attrs) {
              super(context);
          }
      }

    Here is the layout my_view.xml:

      <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
          android:orientation="vertical"
          android:layout_width="fill_parent"
          android:layout_height="fill_parent"
          android:gravity="center">
          <View class="com.company.product.client.android.gui.views.LiveImageView"
              android:id="@+id/lvImage"
              android:layout_width="wrap_content"
              android:layout_height="wrap_content" />
      </LinearLayout>

    And here is the onCreate in my Activity, LiveViewActivity:

      @Override
      public void onCreate(Bundle savedInstanceState) {
          super.onCreate(savedInstanceState);
          try {
              setContentView(R.layout.my_view);
              final LiveImageView lvImage = (LiveImageView) findViewById(R.id.lvImage);
          } catch (final Exception e) {
              Log.e(TAG, "onCreate() Exception: " + e.toString());
              e.printStackTrace();
          }
      }

    Finally, this is the stack trace:
      02-11 17:25:24.829: ERROR/LiveViewActivity(1942): onCreate() Exception: java.lang.ClassCastException: android.view.View
      02-11 17:25:24.839: WARN/System.err(1942): java.lang.ClassCastException: android.view.View
      02-11 17:25:24.839: WARN/System.err(1942): at com.company.product.client.android.gui.screen.LiveViewActivity.onCreate(LiveViewActivity.java:26)
      02-11 17:25:24.839: WARN/System.err(1942): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
      02-11 17:25:24.849: WARN/System.err(1942): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2459)
      02-11 17:25:24.849: WARN/System.err(1942): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512)
      02-11 17:25:24.849: WARN/System.err(1942): at android.app.ActivityThread.access$2200(ActivityThread.java:119)
      02-11 17:25:24.849: WARN/System.err(1942): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863)
      02-11 17:25:24.859: WARN/System.err(1942): at android.os.Handler.dispatchMessage(Handler.java:99)
      02-11 17:25:24.859: WARN/System.err(1942): at android.os.Looper.loop(Looper.java:123)
      02-11 17:25:24.859: WARN/System.err(1942): at android.app.ActivityThread.main(ActivityThread.java:4363)
      02-11 17:25:24.869: WARN/System.err(1942): at java.lang.reflect.Method.invokeNative(Native Method)
      02-11 17:25:24.869: WARN/System.err(1942): at java.lang.reflect.Method.invoke(Method.java:521)
      02-11 17:25:24.869: WARN/System.err(1942): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
      02-11 17:25:24.869: WARN/System.err(1942): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
      02-11 17:25:24.879: WARN/System.err(1942): at dalvik.system.NativeStart.main(Native Method)
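
    A hedged diagnosis: in my_view.xml the element tag is <View> with a capital V. The Android inflater honors the class attribute only on the lowercase <view> tag; a capital <View> element instantiates a plain android.view.View and silently ignores class, which matches the ClassCastException: android.view.View above. Using the fully qualified class name as the tag should fix it (sketch):

      <com.company.product.client.android.gui.views.LiveImageView
          android:id="@+id/lvImage"
          android:layout_width="wrap_content"
          android:layout_height="wrap_content" />

    Separately, since the view is inflated from XML, LiveImageView's two-argument constructor should pass attrs up the chain (and BaseImageView would need a matching (Context, AttributeSet) constructor calling super(context, attrs)) so the layout attributes are actually applied.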

  • How do I launch a WPF app from command.com? I'm getting a FontCache error.

    - by jttraino
    I know this is not ideal, but my constraint is that I have a legacy application written in Clipper, and I want to launch a new WinForms/WPF application from inside it (to ease the transition). The legacy Clipper application launches it using:

      SwpRunCmd("C:\MyApp\MyBat.bat",0)

    The batch file contains a command like this:

      C:\PROGRA~1\INTERN~1\iexplore "http://QASVR/MyApp/AppWin/MyCompany.MyApp.AppWin.application#MyCompany.MyApp.AppWin.application"

    It launches a WinForms/WPF app that we deploy via ClickOnce. Everything went well until we introduced WPF into the application; before that, we could easily launch from the legacy application. Since we introduced WPF, however, we see the following behavior: if we launch via the Clipper application first, we get an exception when launching the application. The error text is:

      The type initializer for 'System.Windows.FrameworkElement' threw an exception.
        at System.Windows.FrameworkElement..ctor()
        at System.Windows.Controls.Panel..ctor()
        at System.Windows.Controls.DockPanel..ctor()
        at System.Windows.Forms.Integration.AvalonAdapter..ctor(ElementHost hostControl)
        at System.Windows.Forms.Integration.ElementHost..ctor()
        at MyCompany.MyApp.AppWin.Main.InitializeComponent()
        at MyCompany.MyApp.AppWin.Main..ctor(String[] args)
        at MyCompany.MyApp.AppWin.Program.Main(String[] args)

      The type initializer for 'System.Windows.Documents.TextElement' threw an exception.
        at System.Windows.FrameworkElement..cctor()

      The type initializer for 'System.Windows.Media.FontFamily' threw an exception.
        at System.Windows.Media.FontFamily..ctor(String familyName)
        at System.Windows.SystemFonts.get_MessageFontFamily()
        at System.Windows.Documents.TextElement..cctor()

      The type initializer for 'MS.Internal.FontCache.Util' threw an exception.
        at MS.Internal.FontCache.Util.get_WindowsFontsUriObject()
        at System.Windows.Media.FontFamily.PreCreateDefaultFamilyCollection()
        at System.Windows.Media.FontFamily..cctor()

      Invalid URI: The format of the URI could not be determined.
        at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
        at System.Uri..ctor(String uriString, UriKind uriKind)
        at MS.Internal.FontCache.Util..cctor()

    If we launch the application via the URL (in IE) or via the icon on the desktop first, we do not get the exception and the application launches as expected. The neat thing is that whatever we launch with first determines whether the app will launch at all. So if we launch with the legacy app first, it breaks right away and we can't get the app to run even if we then launch from the otherwise-successful URL or icon; to get it to work, we have to log out and back in and start it from the URL or icon. If we first use the URL or the icon, we have no problem launching from the legacy application from that point forward (until we log out and come back in). One other piece of information: we are able to reproduce the problem as follows. If we open a command prompt with cmd.exe and execute a statement to launch from the URL, it succeeds. If, however, we open a command prompt with command.com and execute that same statement, we get the breaking behavior. We assume this is because the legacy Clipper application uses the equivalent of command.com to create the shell that spawns the other app. We have tried a bunch of hacks like having command.com run cmd.exe or psexec and then executing, but nothing seems to work.
    We have some ideas for workarounds (like making the app launch on startup so we force the successful launch from a URL, making all subsequent launches successful), but they are all sub-optimal even though we have a great deal of control over our workstations. To reduce the chance that this is related to permissions, we have given the launching account administrative rights (as well as non-administrative rights in case that made a difference). Any ideas would be greatly appreciated. Like I said, we have some workarounds, but I would love to avoid them. Thanks!
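
    One hedged avenue, offered as an editor's guess rather than a confirmed fix: WPF's font cache derives the Windows fonts location from the process environment, and the 16-bit command.com inherits a stripped, short-name environment that cmd.exe does not, which lines up with the Invalid URI failure at the bottom of MS.Internal.FontCache.Util..cctor(). A launcher batch that detaches into a fresh cmd.exe process tree via start may be worth ruling out explicitly (a sketch; the URL is copied from the question):

      @echo off
      rem Re-launch via start under cmd.exe so the ClickOnce app inherits a full
      rem 32-bit environment (%windir%, long paths) before WPF's font cache runs.
      %SystemRoot%\system32\cmd.exe /c start "" "http://QASVR/MyApp/AppWin/MyCompany.MyApp.AppWin.application#MyCompany.MyApp.AppWin.application"

    The poster mentions trying command.com-to-cmd.exe hops already, so this is only a variant to test: start creates a new process rather than a direct child of the degraded shell.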

  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample:

      #!/bin/bash
      TRUE=1
      FALSE=0

      ############### Added for testing
      log4bash_LOG_ENABLED=$TRUE
      log4bash_rootLogger=$TRACE,f,s
      log4bash_appender_f=file
      log4bash_appender_f_dir=$(pwd)
      log4bash_appender_f_file=test.log
      log4bash_appender_f_roll_format=%Y%m
      log4bash_appender_f_roll=$TRUE
      log4bash_appender_f_maxBackupIndex=10
      ####################################

      log4bash_abs(){
          if [ "${1:0:1}" == "." ]; then
              builtin echo ${rootDir}/${1}
          else
              builtin echo ${1}
          fi
      }

      log4bash_check_app_dir(){
          if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then
              dir=$(log4bash_abs $1)
              if [ ! -d ${dir} ]; then
                  # log a separation line
                  mkdir $dir
              fi
          fi
      }

      # Delete old log files
      # $1 Log directory
      # $2 Log filename
      # $3 Log filename suffix
      # $4 Max backup index
      log4bash_delete_old_files(){
          ##### Added for testing
          builtin echo "Running log4bash_delete_old_files $@" >&2
          #####
          if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then
              local directory=$(log4bash_abs $1)
              local filename=$2
              local maxBackupIndex=$4
              local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g')
              local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt)
              local fileCnt=$(builtin echo -e "${logFileList}" | wc -l)
              local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex))
              local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g'))
              ##### Added for testing
              builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" >&2
              #####
              if [ ${fileToDeleteCnt} -gt 0 ]; then
                  for f in "${fileToDelete[@]}"; do
                      #### Added for testing
                      builtin echo "Removing file ${f}" >&2
                      ####
                      builtin eval rm -f ${f}
                  done
              fi
          fi
      }

      # Appender
      # $1 Log directory
      # $2 Log file
      # $3 Log file roll ?
      # $4 Appender Name
      log4bash_filename(){
          builtin echo "Running log4bash_filename $@" >&2
          local format
          local filename
          log4bash_check_app_dir "${1}"
          if [ ${3} -eq 1 ]; then
              local formatProp=${4}_roll_format
              format=${!formatProp}
              if [ -z ${format} ]; then
                  format=$log4bash_appender_file_format
              fi
              local suffix=.`date "+${format}"`
              filename=${1}/${2}${suffix}
              # Old log files deletion
              local previousFilenameVar=int_${4}_file_previous
              local maxBackupIndexVar=${4}_maxBackupIndex
              if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then
                  builtin eval export $previousFilenameVar=$filename
                  log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}"
              else
                  builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}"
              fi
          else
              filename=${1}/${2}
          fi
          builtin echo $filename
      }

      ######################## Added for testing
      filename_caller(){
          builtin echo "filename_caller Call $1"
          output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" ))
          builtin echo ${output}
      }

      #### Previous logs generation
      for i in {1101..1120}; do
          file="${log4bash_appender_f_file}.2012${i:2:3}"
          builtin echo "${file} $i"
          touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file
      done

      for i in {1..4}; do
          filename_caller $i
      done

    I expect the log4bash_filename function to step into the following if only when the calculated log filename is different from the previous one:

      if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then

    For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but it's not, so log4bash_filename steps into this if every time, which is really not necessary... It looks like the issue is due to the following line not working properly:

      builtin eval export $previousFilenameVar=$filename

    I have some theories to explain why:

      - In the original code, functions are declared and exported as readonly, which would make them unable to modify global variables. I removed the readonly declarations in the above sample, but the problem persists.
      - Function calls are performed inside $(), which runs them in separate subshells, so variables modified there are not exported back to the main shell.

    But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!
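
    A hedged confirmation of the second theory: command substitution does run the function in a subshell, so the builtin eval export inside log4bash_filename can never change the caller's environment; the export succeeds, but it dies with the subshell. One workaround sketch is to persist the previous filename in a small state file instead of a shell variable (the file path here is illustrative):

      # Inside log4bash_filename, replacing the previousFilenameVar logic:
      local state_file="/tmp/log4bash.${4}.previous"    # hypothetical per-appender state file
      local previous=""
      [ -f "${state_file}" ] && previous=$(cat "${state_file}")
      if [ -n "${!maxBackupIndexVar}" ] && [ "${previous}" != "${filename}" ]; then
          builtin echo "${filename}" > "${state_file}"  # survives the $() subshell
          log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}"
      fi

    The readonly theory can likely be discarded: marking a function readonly prevents redefining it, not writing to globals from inside it; it is the subshell boundary that loses the value.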

  • JavaScript: this

    - by bdukes
    JavaScript is a language steeped in juxtaposition. It was made to “look like Java,” yet is dynamic and classless. From this origin, we get the new operator and the this keyword. You are probably used to this referring to the current instance of a class, so what could it mean in a language without classes? In JavaScript, this refers to the object off of which a function is referenced when it is invoked (unless it is invoked via call or apply). What this means is that this is not bound to your function, and can change depending on how your function is invoked. It also means that this changes when declaring a function inside another function (i.e. each function has its own this), such as when writing a callback. Let's see some of this in action:

      var obj = {
          count: 0,
          increment: function () {
              this.count += 1;
          },
          logAfterTimeout: function () {
              setTimeout(function () {
                  console.log(this.count);
              }, 1);
          }
      };
      obj.increment();
      console.log(obj.count); // 1

      var increment = obj.increment;
      window.count = 'global count value: ';
      increment();
      console.log(obj.count);    // 1
      console.log(window.count); // global count value: 1

      var newObj = {count: 50};
      increment.call(newObj);
      console.log(newObj.count); // 51

      obj.logAfterTimeout(); // global count value: 1

      obj.logAfterTimeout = function () {
          var proxiedFunction = $.proxy(function () {
              console.log(this.count);
          }, this);
          setTimeout(proxiedFunction, 1);
      };
      obj.logAfterTimeout(); // 1

      obj.logAfterTimeout = function () {
          var that = this;
          setTimeout(function () {
              console.log(that.count);
          }, 1);
      };
      obj.logAfterTimeout(); // 1

    The last couple of examples here demonstrate some methods for making sure you get the values you expect. The first time logAfterTimeout is redefined, we use jQuery.proxy to create a new function which has its this permanently set to the passed-in value (in this case, the current this). The second time logAfterTimeout is redefined, we save the value of this in a variable (named that in this case, also often named self) and use the new variable in place of this. Now, all of this is to clarify what's going on when you use this. However, it's pretty easy to avoid using this altogether in your code (especially in the way I've demonstrated above). Instead of using this.count all over the place, it would have been much easier if I'd made count a variable instead of a property, and then I wouldn't have to use this to refer to it:

      var obj = (function () {
          var count = 0;
          return {
              increment: function () {
                  count += 1;
              },
              logAfterTimeout: function () {
                  setTimeout(function () {
                      console.log(count);
                  }, 1);
              },
              getCount: function () {
                  return count;
              }
          };
      }());

    If you're writing your code in this way, the main place you'll run into issues with this is when handling DOM events (where this is the element on which the event occurred). In that case, just be careful when using a callback within that event handler that you're not expecting this to still refer to the element (and use proxy or that/self if you need to refer to it). Finally, as demonstrated in the example, you can use call or apply on a function to set its this value. This isn't often needed, but you may also want to know that you can use apply to pass in an array of arguments to a function (e.g. console.log.apply(console, [1, 2, 3, 4])).
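
    An editorial addendum, assuming an ES5-capable environment: Function.prototype.bind does natively what the jQuery.proxy example above does, returning a new function with this permanently fixed:

      obj.logAfterTimeout = function () {
          // bind returns a new function whose `this` is locked to the value passed in
          setTimeout(function () {
              console.log(this.count);
          }.bind(this), 1);
      };
      obj.logAfterTimeout(); // 1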

  • VMWare player - compiling server modules - Ubuntu 13.10

    - by user211976
    While running Ubuntu 13.04, whenever the Linux kernel was updated, the following used to make VMware Player happy:

      sudo apt-get install linux-headers-$(uname -r)
      sudo vmware-modconfig --console --install-all

    Yesterday I upgraded to Ubuntu 13.10 and, lo and behold, the above workaround does not work anymore:

      Unable to install all modules. See log for details.

    I assume by "See log" it means the files in /tmp/vmware-root/*log:

      root@hugin:/tmp/vmware-root# ls -ltr /tmp/vmware-root/
      totalt 16
      -rw-r--r-- 1 root root 3815 nov 6 13:54 vmware-apploader-17267.log
      -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-17693.log
      -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-17742.log
      -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-18701.log
      -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-18750.log
      -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-19100.log
      -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-19149.log
      -rw-r--r-- 1 root root 9250 nov 6 13:54 vmware-modconfig-17267.log

      root@hugin:/tmp/vmware-root# tail /tmp/vmware-root/vmware-modconfig-17267.log
      2013-11-06T13:54:28.950+01:00| modconfig| I120: Copied Module.symvers from "/tmp/modconfig-wpDrtf/vmci-only/Module.symvers" to "/tmp/modconfig-wpDrtf/vsock-only/Module.symvers".
      2013-11-06T13:54:28.950+01:00| modconfig| I120: Building module with command "/usr/bin/make -j8 -C /tmp/modconfig-wpDrtf/vsock-only auto-build HEADER_DIR=/lib/modules/3.11.0-12-generic/build/include CC=/usr/bin/gcc IS_GCC_3=no"
      2013-11-06T13:54:31.048+01:00| modconfig| I120: Successfully built vsock. Module is currently at "/tmp/modconfig-wpDrtf/vsock.o".
      2013-11-06T13:54:31.048+01:00| modconfig| I120: Found the vsock symvers file at "/tmp/modconfig-wpDrtf/vsock-only/Module.symvers".
      2013-11-06T13:54:31.048+01:00| modconfig| I120: Installing vsock from /tmp/modconfig-wpDrtf/vsock.o to /lib/modules/3.11.0-12-generic/misc/vsock.ko.
      2013-11-06T13:54:31.048+01:00| modconfig| I120: Registering file "/lib/modules/3.11.0-12-generic/misc/vsock.ko".
      2013-11-06T13:54:31.400+01:00| modconfig| I120: "/usr/lib/vmware-installer/2.1.0/vmware-installer" exited with status 0.
      2013-11-06T13:54:31.400+01:00| modconfig| I120: Registering file "/usr/lib/vmware/symvers/vsock-3.11.0-12-generic".
      2013-11-06T13:54:31.764+01:00| modconfig| I120: "/usr/lib/vmware-installer/2.1.0/vmware-installer" exited with status 0.
      2013-11-06T13:54:31.786+01:00| modconfig| I120: We are now shutdown. Ready to die!

      root@hugin:/tmp/vmware-root# tail /tmp/vmware-root/vmware-apploader-17267.log
      2013-11-06T13:54:20.911+01:00| appLoader| I120: libglib-2.0.so.0 <SYSTEM>
      2013-11-06T13:54:20.911+01:00| appLoader| I120: libz.so.1 <SYSTEM>
      2013-11-06T13:54:20.911+01:00| appLoader| I120: libvmware-modconfig-console.so <SHIPPED>
      2013-11-06T13:54:20.912+01:00| appLoader| I120: Shipped glib version is 2.24
      2013-11-06T13:54:20.912+01:00| appLoader| I120: System glib version is 2.38
      2013-11-06T13:54:20.912+01:00| appLoader| I120: Using system version of glib.
      2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libgcc_s.so.1.
      2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libglib-2.0.so.0.
      2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libz.so.1.
      2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading shipped version of libxml2.so.2.
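
    A few hedged diagnostics, since the modconfig log quoted above actually ends with vsock building and installing successfully: the failure is probably in a module other than vsock, so finding the first module whose build fails is the key step (a sketch):

      grep -iE 'error|failed' /tmp/vmware-root/vmware-modconfig-*.log   # first failing module, if any
      tail -40 /tmp/vmware-root/vmware-vmis-*.log                       # the vmis logs listed above are empty; confirm
      sudo vmware-modconfig --console --install-all                     # retry and capture fresh logs

    If the failing module turns out to be vmnet or vmblock, the kernel 3.11 API changes in Ubuntu 13.10 are a likely culprit; community patches for that Player/kernel combination circulated at the time (this is from memory and should be verified against your Player version).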

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of automating an “ubuntu-12.04.1-server-amd64” OS installation on it. There are two HDDs for the OS installation, and they are in a RAID1 relationship set up through the BIOS. The kickstart configuration file looks like this:

      #Generated by Kickstart Configurator
      #platform=AMD64 or Intel EM64T

      #System language
      lang en_US
      #Language modules to install
      langsupport en_US
      #System keyboard
      keyboard us
      #System mouse
      mouse
      #System timezone
      timezone Asia/Dili
      #Root password
      rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
      #Initial user
      user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
      #Reboot after installation
      reboot
      #Use text mode install
      text
      #Install OS instead of upgrade
      install
      #Use Web installation
      url --url my_repo_location
      #System bootloader configuration
      bootloader --location=mbr
      #Clear the Master Boot Record
      zerombr yes
      #Partition clearing information
      clearpart --all --initlabel
      #Disk partitioning information
      part /boot --fstype ext4 --size 100 --ondisk sda
      part / --fstype ext4 --size 10000 --ondisk sda
      part /var --fstype ext4 --size 10000 --ondisk sda
      part swap --size 1024 --ondisk sdb
      #System authorization information
      auth --useshadow --enablemd5
      #Network information
      network --bootproto=dhcp --device=eth0
      #Firewall configuration
      firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
      #X Window System configuration information
      xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME

    But I am getting the error below:

      No root file system is defined

    Please advise: do we need to modify the kickstart configuration file? Any help in this regard would be very helpful. The automated Ubuntu OS installation succeeds in a virtual machine (VM) with the above ks.cfg, but fails on the physical machine. If possible, please suggest a corrected ks.cfg to resolve the problem. Thanks & Regards, Rajesh Prasad
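
    A hedged reading of the failure: with BIOS ("fake") RAID1, the Ubuntu installer often sees the mirrored pair through dmraid as a single /dev/mapper device rather than as sda and sdb, so the part ... --ondisk sda / --ondisk sdb lines may match no disk at all, leaving the partitioner with no root file system defined; that would also explain why the identical ks.cfg works in a VM, where the disk really is sda. Ubuntu's kickstart support is a compatibility layer, and one common workaround is to drive the partitioner with preseed directives embedded in ks.cfg. A sketch only; the device path is a typical Intel fake-RAID name and must be checked from the installer shell with ls /dev/mapper/:

      preseed partman-auto/disk string /dev/mapper/isw_xxxx_Volume0
      preseed partman-auto/method string regular
      preseed partman-auto/choose_recipe select atomic
      preseed partman/confirm boolean true
      preseed partman/confirm_write_new_label boolean true
      preseed partman/choose_partition select finish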

  • Git-based storage and publishing, infrastructure advice

    - by Joel Martinez
    I wanted to get some advice on moving a system to "the cloud"... specifically, I'm looking to move into some of Windows Azure's managed services, as right now I'm managing a VM. Basically, the system operates on some data stored in a GitHub git repository. I'll describe the current architecture (all hosted on a single server):

      - GitHub, configured with a webhook pointing at...
      - an ASP.NET MVC application, which accepts the webhook from git and pushes a message onto...
      - an Azure Service Bus queue, which is drained by...
      - a Windows service, which pulls the message from the queue,...
      - fetches the latest data from the git repository (using LibGit2Sharp) onto the local disk, and finally...
      - operates on the data in git to produce a static HTML website hosted/served by IIS.

    The system works really well, actually... but I would like to get out of the business of managing the VM and move to using some combination of Azure web and worker roles. Because the system relies so heavily on the git repository on the local filesystem, though, I'm finding it difficult to figure out how to architect it in the cloud. I know you can get file system access, so in theory I could just fetch the repository if there's nothing on disk... but the performance/responsiveness of the system sort of depends on the repository being available and only having to fetch diffs, which is relatively quick, as opposed to periodically having to fetch the entire (somewhat large) git repository whenever the web or worker role is recycled. So I would love some advice on how you would architect such a system :) Ultimately, the only real requirement is to be able to serve HTML content that's been produced from the contents of a git repository (in a relatively responsive manner, from a publishing perspective)... please feel free to ask any clarifying questions if there's something I omitted. Thanks!
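
    A sketch of the warm-cache approach, not a full design: on role start, clone only when the local disk is empty, otherwise fetch diffs, which preserves the current responsiveness except on the rare recycle. Paths and the repository URL are placeholders, and the fetch call follows the LibGit2Sharp API of that era; treat the exact signature as an assumption to verify against your package version.

      using System.IO;
      using LibGit2Sharp;

      static class RepoCache
      {
          public static void EnsureUpToDate(string localPath, string remoteUrl)
          {
              if (!Directory.Exists(Path.Combine(localPath, ".git")))
              {
                  // Cold start (role recycled onto a clean disk): full clone, slow but rare.
                  Repository.Clone(remoteUrl, localPath);
                  return;
              }
              using (var repo = new Repository(localPath))
              {
                  // Warm path: fetch only the diffs, keeping publishing responsive.
                  repo.Network.Fetch(repo.Network.Remotes["origin"]);
              }
          }
      }

    Pairing this with Azure local storage (the per-instance scratch disk) keeps the clone across role restarts, though not across moves to a fresh VM instance, which is exactly when the cold-start branch pays for itself.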

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition. And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive! It almost makes you want to go with Oracle. That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions. Alas…they don't…officially anyway. Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement:

      IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%'
         print 'Lame'
      else
         print 'Cool'

    If that statement returns true, it fails. (The print statements are just placeholders.) Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror. Surprisingly, it's not actually implemented in SQL Server! Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard and symbolic, so that you can create two entries for the same file in different directories and/or under different names. You can create them using the MKLINK command in a command prompt:

      mklink /D D:\SkyDrive\Data D:\Data
      mklink /D D:\SkyDrive\Log D:\Log

    This creates symbolic links from my data and log folders to my SkyDrive folder. Any file saved in either location will instantly appear in the other. And since my SkyDrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth, of course). So what does this have to do with database mirroring? Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link. Although it doesn't actually use SkyDrive, it performs the same function. So in effect, the following statement:

      ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022'

    is turned into:

      mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$"

    The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers. I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality.
    To wrap this up, all you have to do is the following:

      1. Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples).
      2. On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data
      3. On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data
      4. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log).

    Voila! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either. So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline. It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

  • Some OBI EE Tricks and Tips in the Admin Tool By Gerry Langton

    - by hamsun
    How to set the log level from a Session variable Initialization block

    As we know, it is normal to set the log level non-zero for a particular user when we wish to debug problems. However, sometimes it is inconvenient to go into each user's properties in the Admin tool and update the log level, so I am showing a method which allows the log level to be set for all users via a session initialization block. This is particularly useful for anyone wanting an alternative way to set the log level. The screenshots shown use the OBIEE 11g SampleApp demo but are applicable to any environment.

    Open the appropriate RPD in online mode and navigate to Manage Variables. Select Session Initialization Blocks, right-click in the white space and create a New Initialization Block. I called the initialization block Set_Loglevel. Now click on 'Edit Data Source' to enter the SQL, and choose the 'Use OBI EE Server' option for the SQL. This means that the SQL provided must use tables which have been defined in the Physical layer of the RPD, and whilst there is no need to provide a connection pool, you must work in online mode. The SQL can access any of the RPD tables and is purely used to return a value of 2. The 'Test' button confirms that the SQL is valid.

    Next, click on the 'Edit Data Target' button to add the LOGLEVEL variable to the initialization block. Check the 'Enable any user to set the value' option so that this will work for any user. Click OK; a confirmation message will display because LOGLEVEL is a system session variable. Click 'Yes', then click 'OK' to save the initialization block, and check in the online changes.

    To test that LOGLEVEL has been set, log in to OBIEE using an administrative login (e.g. weblogic) and reload server metadata, either from the Analysis editor or from the Administration > Reload Files and Metadata link. Run a query, then navigate to Administration > Manage Sessions and click 'View Log' for the query just issued (which should be approximately the last in the list). A log file should exist and, with LOGLEVEL set to 2, should include both logical and physical SQL. If more diagnostic information is required, set LOGLEVEL to a higher value.

    If logging is required only for a particular analysis, an alternative method can be used directly from the Analysis editor. Edit the analysis for which debugging is required and click on the Advanced tab. Scroll down to the Advanced SQL clauses section and enter the following in the Prefix box: SET VARIABLE LOGLEVEL = 2; Click the 'Apply SQL' button. The SET VARIABLE statement will now prefix the analysis's logical SQL, so that any time this analysis is run it will produce a log.

    You can find information about training for Oracle BI EE products here or in the OU Learning Paths. Please send me an email at [email protected] if you have any further questions.

    About the Author: Gerry Langton started at Siebel Systems in 1999, working as a technical instructor teaching both Siebel application development and Siebel Analytics (which subsequently became Oracle BI EE). Since 2006 Gerry has worked as Senior Principal Instructor within Oracle University, specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.
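
    For reference, the data-source SQL for the init block can be a constant select issued as logical SQL. A sketch only: the subject area and table below are SampleApp-style placeholders, any presentation table available in your RPD will do, and only the literal 2 matters:

      SELECT 2 FROM "A - Sample Sales"."Time"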

  • XNA C#: How to draw fonts in different colors

    - by XNA newbie
    I'm doing a simple chat system with XNA C#: a chatbox that contains the last 5 lines of chat typed by the users, something like an MMORPG chat system.

      [User1name] says: Hi
      [User2name] says: Hello
      [User1name] says: What are you doing?
      [User2name] says: I'm fine
      [System] The time is now 1:03AM.

    When the user presses ENTER, the text he entered is added to an ArrayList: chatList.Add(s); For displaying the text, I use:

      for (int i = 0; i < chatList.Count(); i++)
      {
          spriteBatch.DrawString(font, chatList[i], new Vector2(40, arr1[i]), Color.Yellow);
      }

    (arr1[i] contains the 5 y-axis points at which the 5 chat lines are printed in the chatbox.)

    Question 1: What if I have another type of message added to chatList (something like a system message)? I need system messages printed in red. And if the user keeps chatting, the chatbox should update accordingly (max 5 lines): the newest chat is shown at the bottom and the oldest is deleted once the 5-line limit is reached.

      [User2name] says: Hello
      [User1name] says: What are you doing?
      [User2name] says: I'm fine
      [System] The time is now 1:03AM.
      [User1name] says: Ok, great to hear that!

    I'm having trouble printing each line in a color that matches its message type: yellow for normal messages, red for system messages.

    Question 2: Next, I need the chat text to be white while the user's name stays yellow (like the Warcraft III chat system). How do I do that? I'm having a hard time thinking of a solution for these. Advice needed.
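
    A hedged sketch for the first question (the type and field names here are the editor's, not from the question): store each line together with its color instead of as a bare string, so the draw loop no longer has to guess the message type.

      // One chat line plus the color it should be drawn in.
      struct ChatLine
      {
          public string Text;
          public Color Color;
          public ChatLine(string text, Color color) { Text = text; Color = color; }
      }

      List<ChatLine> chatList = new List<ChatLine>();

      // Adding messages (normal chat yellow, system messages red):
      chatList.Add(new ChatLine(s, Color.Yellow));
      chatList.Add(new ChatLine("[System] The time is now 1:03AM.", Color.Red));
      if (chatList.Count > 5) chatList.RemoveAt(0);   // keep only the newest 5 lines

      // Drawing:
      for (int i = 0; i < chatList.Count; i++)
      {
          spriteBatch.DrawString(font, chatList[i].Text, new Vector2(40, arr1[i]), chatList[i].Color);
      }

    For the second question, a common approach is to draw the name first in yellow, then draw the message text in white at an X offset of font.MeasureString(name).X, since MeasureString returns the rendered size of the string.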

  • Android app unable to connect to the HSQLDB server

    - by Chinta
    I am trying to connect my Android app to the HSQLDB server. The server runs on computer-1. I can connect to the DB server from the local machine through Java as well as DbVisualizer, and I can connect to the DB server from another computer (computer-2) using DbVisualizer with computer-1's IP address. Now, trying to connect from my app on a Nexus 7 the same way I was connecting from computer-2, I am getting a "No suitable driver" error. Below is the log:

      11-02 12:01:41.235: W/System.err(9803): connection string <jdbc:hsqldb:hsql://192.168.2.6:9001/qBank>
      11-02 12:01:41.235: W/System.err(9803): user id string <SA>
      11-02 12:01:41.235: W/System.err(9803): password string <>
      11-02 12:01:41.235: W/System.err(9803): ERROR: failed to get connection.
      11-02 12:01:41.235: W/System.err(9803): java.sql.SQLException: No suitable driver
      11-02 12:01:41.235: W/System.err(9803): at java.sql.DriverManager.getConnection(DriverManager.java:186)
      11-02 12:01:41.235: W/System.err(9803): at java.sql.DriverManager.getConnection(DriverManager.java:213)
      11-02 12:01:41.235: W/System.err(9803): at com.scan.util.GatherData.getConnection(GatherData.java:135)
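
    A hedged fix worth trying: Android's DriverManager does not auto-discover JDBC drivers from the jar's service metadata the way desktop Java does, and "No suitable driver" is the classic symptom. Registering the HSQLDB driver class explicitly before requesting the connection should rule this out (the class name below is the HSQLDB 2.x driver):

      // Force the driver class to load and register itself with DriverManager.
      Class.forName("org.hsqldb.jdbc.JDBCDriver");
      Connection conn = DriverManager.getConnection(
              "jdbc:hsqldb:hsql://192.168.2.6:9001/qBank", "SA", "");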

  • What does the crash mean? And why is my Ubuntu Blackbox crashing, and how can I check more deeply?

    - by YumYumYum
    My system had been running for about 6 hours. Twice I lost remote access and the machine stopped functioning (the IP was gone, etc.). The output below shows "crash" three times, but I have no idea what happened or why. How can I find out what went wrong?

      $ last
      sun pts/0 d51a429c9.access Mon Mar 19 13:44 still logged in
      sun tty7 :0 Mon Mar 19 12:17 still logged in
      reboot system boot 2.6.38-8-generic Mon Mar 19 12:17 - 13:49 (01:31)
      sun pts/0 d51a429c9.access Mon Mar 19 10:05 - crash (02:12)
      sun tty7 :0 Mon Mar 19 10:00 - crash (02:16)
      reboot system boot 2.6.38-8-generic Mon Mar 19 10:00 - 13:49 (03:48)
      sun pts/0 d51a429c9.access Mon Mar 19 09:24 - down (00:35)
      sun tty7 :0 Mon Mar 19 09:20 - down (00:39)
      reboot system boot 2.6.38-8-generic Mon Mar 19 09:20 - 10:00 (00:39)
      sun pts/2 d51a429c9.access Sun Mar 18 18:04 - down (01:14)
      sun pts/1 d51a429c9.access Sun Mar 18 17:43 - down (01:35)
      sun pts/0 d51a429c9.access Sun Mar 18 15:07 - 18:47 (03:40)
      sun pts/1 d51a429c9.access Sun Mar 18 12:58 - 17:42 (04:43)
      sun pts/0 d51a429c9.access Sun Mar 18 10:21 - 15:06 (04:44)
      sun tty7 :0 Sun Mar 18 08:56 - down (10:22)
      reboot system boot 2.6.38-8-generic Sun Mar 18 08:56 - 19:19 (10:22)
      sun tty7 :0 Sat Mar 17 18:03 - down (14:51)
      reboot system boot 2.6.38-8-generic Sat Mar 17 18:03 - 08:55 (14:51)
      sun tty7 :0 Sat Mar 17 15:00 - down (01:38)
      reboot system boot 2.6.38-8-generic Sat Mar 17 15:00 - 16:39 (01:38)
      sun pts/0 d51a4297d.access Sat Mar 17 10:45 - 14:32 (03:46)
      sun tty7 :0 Fri Mar 16 18:46 - crash (20:14)
      reboot system boot 2.6.38-8-generic Fri Mar 16 18:46 - 16:39 (21:53)

      $ sensors
      acpitz-virtual-0
      Adapter: Virtual device
      temp1: +27.8°C (crit = +100.0°C)
      temp2: +29.8°C (crit = +100.0°C)
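
    For the terminology question: a "crash" entry in last(1) output simply means the session record in /var/log/wtmp was never closed by a clean shutdown, i.e. the machine went down hard (power loss, kernel panic, hard hang) without writing a shutdown record. Some hedged places to dig next (a sketch):

      grep -iE 'panic|oops|segfault|error' /var/log/kern.log /var/log/syslog | tail -50
      last -x | head -30     # -x includes shutdown and runlevel records for context
      dmesg | tail -50       # kernel messages from the current boot only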

  • nvidia driver problems after upgrading to 3.2.0-26 on Ubuntu 12.04 64bit

    - by Lev Levitsky
    After installing the latest updates I can't set the screen resolution higher than 1024x768, and every time after boot I get the message:

      Could not apply the stored configuration for the monitors

    (Note: removing ~/.config/monitors.xml stopped the message, but not the problem.) I can boot with 3.2.0-25 and the graphics look normal. Here's what I have in /var/log/apt/term.log (excerpt):

      Setting up linux-image-3.2.0-26-generic (3.2.0-26.41) ...
      Running depmod.
      update-initramfs: deferring update (hook will be called later)
      Examining /etc/kernel/postinst.d.
      run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
      Error! Problems with depmod detected. Automatically uninstalling this module.
      DKMS: Install Failed (depmod problems). Module rolled back to built state.
      run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
      update-initramfs: Generating /boot/initrd.img-3.2.0-26-generic
      run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
      run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
      run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
      Generating grub.cfg ...
      Found linux image: /boot/vmlinuz-3.2.0-26-generic
      Found initrd image: /boot/initrd.img-3.2.0-26-generic
      Found linux image: /boot/vmlinuz-3.2.0-25-generic
      Found initrd image: /boot/initrd.img-3.2.0-25-generic
      Found linux image: /boot/vmlinuz-3.2.0-24-generic
      Found initrd image: /boot/initrd.img-3.2.0-24-generic
      Found linux image: /boot/vmlinuz-3.2.0-23-generic
      Found initrd image: /boot/initrd.img-3.2.0-23-generic
      Found linux image: /boot/vmlinuz-3.0.0-17-generic
      Found initrd image: /boot/initrd.img-3.0.0-17-generic
      Found memtest86+ image: /boot/memtest86+.bin

    I went to "Additional Drivers" and saw some updates available there, but an attempt to install them failed, leaving the following in /var/log/jockey.log (long log, pasted here). The full log won't fit in the question, so I'm showing $ fgrep 'ERROR' /var/log/jockey.log:

      2012-06-30 17:29:57,897 WARNING: modinfo for module vmxnet failed: ERROR: modinfo: could not find module vmxnet
      2012-06-30 17:29:57,937 WARNING: modinfo for module wl failed: ERROR: modinfo: could not find module wl
      2012-06-30 17:29:58,072 WARNING: modinfo for module nvidia_96 failed: ERROR: modinfo: could not find module nvidia_96
      2012-06-30 17:29:58,240 WARNING: modinfo for module nvidia_current failed: ERROR: modinfo: could not find module nvidia_current
      2012-06-30 17:29:58,293 WARNING: modinfo for module nvidia_current_updates failed: ERROR: modinfo: could not find module nvidia_current_updates
      2012-06-30 17:29:58,351 WARNING: modinfo for module nvidia_173_updates failed: ERROR: modinfo: could not find module nvidia_173_updates
      2012-06-30 17:29:58,385 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173
      2012-06-30 17:29:58,420 WARNING: modinfo for module nvidia_96_updates failed: ERROR: modinfo: could not find module nvidia_96_updates
      2012-06-30 17:29:58,455 WARNING: modinfo for module ath_pci failed: ERROR: modinfo: could not find module ath_pci
      2012-06-30 17:29:58,478 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates
      2012-06-30 17:29:58,531 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx
      2012-06-30 17:29:58,588 WARNING: modinfo for module omapdrm_pvr failed: ERROR: modinfo: could not find module omapdrm_pvr
      2012-06-30 17:29:59,537 WARNING: modinfo for module nvidia_current failed: ERROR: modinfo: could not find module nvidia_current
      2012-06-30 17:29:59,613 WARNING: modinfo for module nvidia_173_updates failed: ERROR: modinfo: could not find module nvidia_173_updates
      2012-06-30 17:29:59,686 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173
      2012-06-30 17:29:59,764 WARNING: modinfo for module nvidia_current_updates failed: ERROR: modinfo: could not find module nvidia_current_updates
      2012-06-30 17:30:29,544 WARNING: modinfo for module nvidia_current_updates failed: ERROR: modinfo: could not find module nvidia_current_updates
      2012-06-30 17:30:29,545 ERROR: XorgDriverHandler.enable(): package or module not installed, aborting

    I'm not sure if it's a bug, as the first log shows some errors. What can I try?
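
    A hedged next step: the term.log excerpt shows the DKMS hook failing with "Error! Problems with depmod detected" and rolling the module back, so the nvidia kernel module was likely never built for 3.2.0-26, which is consistent with 3.2.0-25 still working. A sketch; the module and version strings are illustrative, and dkms status will show the real ones:

      sudo apt-get install --reinstall linux-headers-3.2.0-26-generic
      sudo dkms status                                            # find the exact module/version pair
      sudo dkms build nvidia-current/295.40 -k 3.2.0-26-generic   # version is illustrative
      sudo dkms install nvidia-current/295.40 -k 3.2.0-26-generic
      sudo depmod -a 3.2.0-26-generic                             # the step the postinst hook said failed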

  • What can I do to utilize all my hard disk space?

    - by Twatcher
    I had Windows XP running on my computer. Then I installed Ubuntu from under Windows. Then I decided I wanted to have only Ubuntu, also because I got a system message that I was out of disk space. I booted my system from a live Ubuntu DVD, deleted the partition with Windows on it and also the other partition that had my data on it, and expanded the partition I thought was the system partition (since there was no other partition left); it had ext format. After that Ubuntu was working fine and I thought I had enough disk space, since my hard drive is an 80 GB ATA Maxtor. I left a small partition as backup. But after downloading a small number of files I got the message again that I am running out of disk space. I don't know what to do. How can I make my disk space bigger? I am not used to Ubuntu's file system, and I don't have an overview of how I can actually see how much space is left for me to use. As far as I understand, I now have one partition with the system on it and one small backup partition. My system (from the system utility) is:

      Ubuntu 12.04 LTS
      3.9 GB
      Intel Core 2 2.4 GHz
      80 GB ATA Maxtor

    Here are the results of sudo fdisk -l:

      Disk /dev/sda: 80.0 GB, 79998918144 bytes
      255 heads, 63 sectors/track, 9725 cylinders, total 156247887 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x41ab2316

      Device Boot      Start        End     Blocks  Id System
      /dev/sda1   *       63  123750399  61875168+   7 HPFS/NTFS/exFAT
      /dev/sda2    123750400  156246015   16247808   7 HPFS/NTFS/exFAT
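
    A hedged observation before resizing anything: the fdisk listing above still reports both partitions as type 7 (HPFS/NTFS/exFAT), which does not look like a disk where the Windows partitions were deleted and an ext partition expanded, so the first step is to see what Ubuntu is actually mounted on and how full it really is (a sketch):

      df -h              # free space per mounted filesystem; the / line is the one that matters
      lsblk -f           # block devices with the filesystem each partition really contains
      sudo parted -l     # the partition table as the kernel sees it

    If / genuinely is small, GParted from the live DVD can grow it into the freed space (a partition cannot be resized while it is mounted).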

  • How to remove the last character from StringBuilder

    - by hmloo
    We usually use StringBuilder to append strings in loops, building a string of items separated by a delimiter, but you always end up with an extra delimiter at the end. This code sample shows how to remove the last delimiter from a StringBuilder:

      using System;
      using System.Collections.Generic;
      using System.Text;
      using System.Linq;

      class Program
      {
          static void Main()
          {
              var list = Enumerable.Range(0, 10).ToArray();
              StringBuilder sb = new StringBuilder();
              foreach (var item in list)
              {
                  sb.Append(item).Append(",");
              }
              sb.Length--; // Just reduce the length of the StringBuilder; it's that easy
              Console.WriteLine(sb);
          }
      }
      // Output : 0,1,2,3,4,5,6,7,8,9

    Alternatively, we can use string.Join for the same result; please refer to the code sample below:

      using System;
      using System.Collections.Generic;
      using System.Text;
      using System.Linq;

      class Program
      {
          static void Main()
          {
              var list = Enumerable.Range(0, 10).Select(n => n.ToString()).ToArray();
              string str = string.Join(",", list);
              Console.WriteLine(str);
          }
      }

  • Persisting high score table in flash game without a network. (Featuring: HttpListenerException)

    - by bearcdp
    Hi everyone, this question is very programming-centric, but it's for a game so I figured I might as well post it here. I'm doing polishing work on a GGJ '11 game because it will be shown at an indie arcade tomorrow afternoon, and they're expecting our final build in the morning. We'd like to have a high score table that displays during attract mode, but since it's Flash (Flixel) it would require some networking, Mochi, or something to keep a record of these scores. The only problem is that the machine we'd be running on probably won't have network access. As a quick solution, I thought I'd just write up a dinky little high score server in C#/.NET that could take basic GET requests for submitting scores and getting the score list. We're talking REAL basic, like blocking while waiting for an incoming request, run-and-forget console app, etc. There's no guarantee that our .swf won't get reloaded, and we'd like the scores to persist, so this server would pretty much exist to keep a safe copy of the scores that the game can add to and request, and occasionally the server would write the scores to a flat text file. But HttpListener is giving me error code 87, 'The parameter is incorrect.' Have any idea what I'm doing wrong? Or better yet, am I barking up the wrong tree and missing an obviously simpler solution? This is all I've got so far in my Main():

      HttpListener listener = new HttpListener();
      listener.Prefixes.Add("http://localhost:66666/");
      listener.Start();

    The exception happens at listener.Start(); and the stack trace is:

      at System.Net.HttpListener.AddAllPrefixes()
      at System.Net.HttpListener.Start()
      at WOSEBCE_ScoreServer.Program.Main(String[] args) in C:\Users\Michael\Documents\Visual Studio 2010\VS2010 Projects\WOSEBCE_ScoreServer\WOSEBCE_ScoreServer\Program.cs:line 24
      at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
      at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
      at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
      at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
      at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
      at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
      at System.Threading.ThreadHelper.ThreadStart()
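
    A hedged editorial observation, since it may save a round trip: TCP ports are 16-bit, so 66666 is outside the valid range (the maximum is 65535), and an out-of-range port in the prefix is exactly the kind of input that surfaces as Win32 error 87, "The parameter is incorrect", from HttpListener.Start(). A sketch with an arbitrary valid port:

      HttpListener listener = new HttpListener();
      listener.Prefixes.Add("http://localhost:6666/");  // any free port <= 65535
      listener.Start();

    If the error persists on a valid port under Vista/7, reserving the URL for a non-admin user is the usual next check: netsh http add urlacl url=http://+:6666/ user=Everyone (run from an elevated prompt).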

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11, Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning. First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple: # zfs create -o mountpoint=/export/repo rpool/ai/repo # zfs create rpool/ai/repo/s11 # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt # rsync -aP /mnt/repo /export/repo/s11 # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@fcs # pkgrepo info -s /export/repo/sol11/repo PUBLISHER PACKAGES STATUS UPDATED solaris 4292 online 2012-03-12T20:47:15.378639Z That's all there's to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher: # pkg set-publisher -g file:///export/repo/sol11/repo solaris In case I'd want to access this repository through a (virtual) network, i'll now quickly activate the repository-service: # svccfg -s application/pkg/server \ setprop pkg/inst_root=/export/repo/sol11/repo # svccfg -s application/pkg/server setprop pkg/readonly=true # svcadm refresh application/pkg/server # svcadm enable application/pkg/server That's all you need - now point your browser to http://localhost/ to view your beautiful repository-server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image. Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: A full repository with all updates and at the same time incremental ones up to each of the updates: # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*' # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@sru4 # zfs set snapdir=visible rpool/ai/repo/sol11 # svcadm restart svc:/application/pkg/server:default The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port. But now lets continue with the AI server.  Just a little bit of reading in the dokumentation makes it clear that we will need to run a DHCP server for this.  
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first: # zfs create -o mountpoint=/export/install rpool/ai/install # zfs create -o mountpoint=/zones rpool/zones # zonecfg -z ai-server zonecfg:ai-server> create create: Using system default template 'SYSdefault' zonecfg:ai-server> set zonepath=/zones/ai-server zonecfg:ai-server> add dataset zonecfg:ai-server:dataset> set name=rpool/ai/install zonecfg:ai-server:dataset> set alias=install zonecfg:ai-server:dataset> end zonecfg:ai-server> commit zonecfg:ai-server> exit # zoneadm -z ai-server install # zoneadm -z ai-server boot ; zlogin -C ai-server Give it a hostname and IP address at first boot, and there's the Zone.  For a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now: #zlogin ai-server root@ai-server:~# pkg install install/installadm root@ai-server:~# installadm create-service -n x86-fcs -a i386 \ -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \ -d /export/install/fcs -i 192.168.2.20 -c 3 With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP-Server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC-server comes with several SMF-services. We at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation-service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the installserver itself runs on a x86 machine.) The boot-environment for clients is created in /export/install/fcs and the DHCP-server is configured for 3 IP-addresses starting at 192.168.2.20.  This configuration is stored in a very human readable form in /etc/inet/dhcpd4.conf.  An AI-service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option. Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query it's configuration interactively at first boot.  This makes it very clear that an AI-server is really only a boot-server.  The true source of packets to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients. The configuration of a client is controlled by manifests and profiles.  The manifest controls which packets are installed and how the filesystems are layed out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  
Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server", using the publisher "http://pkg.oracle.com/solaris/release", and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server; the true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, the primary user account, IP addresses, keyboard layout etc.  Hence, profiles are very similar to the old sysid.cfg file.

The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one.  Then modify that to our liking and give it back to the install server to use:

root@ai-server:~# mkdir -p /export/install/configs/manifests
root@ai-server:~# cd /export/install/configs/manifests
root@ai-server:~# installadm export -n x86-fcs -m orig_default \
-o orig_default.xml
root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
root@ai-server:~# vi s11-fcs.small.local.xml
root@ai-server:~# more s11-fcs.small.local.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="S11 Small fcs local">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <destination>
        <image>
          <!-- Specify locales to install -->
          <facet set="false">facet.locale.*</facet>
          <facet set="true">facet.locale.de</facet>
          <facet set="true">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
        </image>
      </destination>
      <source>
        <publisher name="solaris">
          <origin name="http://192.168.2.12/"/>
        </publisher>
      </source>
      <!-- By default the latest build available in the specified IPS
           repository is installed.  If another build is required, the
           build number has to be appended to the 'entire' package in
           the following form:
           <name>pkg:/[email protected]#</name>
      -->
      <software_data action="install">
        <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name>
        <name>pkg:/group/system/solaris-small-server</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>
root@ai-server:~# installadm create-manifest -n x86-fcs -d \
-f ./s11-fcs.small.local.xml
root@ai-server:~# installadm list -m -n x86-fcs
Manifest            Status   Criteria
--------            ------   --------
S11 Small fcs local Default  None
orig_default        Inactive None

The major points in this new manifest are:

- Install "solaris-small-server".
- Install a few locales less than the default.  I'm not that fluent in French or Japanese...
- Use my own package service as publisher, running on IP address 192.168.2.12.
- Install the initial release of Solaris 11: pkg:/[email protected],5.11-0.175.0.0.0.2.0
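Had the manifest not been created as the default (the -d option above), it could be targeted at specific clients with criteria, much like rules in Jumpstart.  A sketch from memory, the MAC address being a placeholder:

root@ai-server:~# installadm set-criteria -m "S11 Small fcs local" \
-n x86-fcs -c mac="08:00:27:AA:3D:B1"
root@ai-server:~# installadm list -m -n x86-fcs

Since this demo setup only serves a handful of identical demo clients, the default manifest is good enough here.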
Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  The modular approach makes it easy to configure numerous clients later on:

root@ai-server:~# mkdir -p /export/install/configs/profiles
root@ai-server:~# cd /export/install/configs/profiles
root@ai-server:~# sysconfig create-profile -o default.xml
root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml
root@ai-server:~# cp default.xml user.xml
root@ai-server:~# vi general.xml mars.xml user.xml
root@ai-server:~# more general.xml mars.xml user.xml
::::::::::::::
general.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/timezone">
    <instance enabled="true" name="default">
      <property_group type="application" name="timezone">
        <propval type="astring" name="localtime" value="Europe/Berlin"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/environment">
    <instance enabled="true" name="init">
      <property_group type="application" name="environment">
        <propval type="astring" name="LANG" value="C"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/keymap">
    <instance enabled="true" name="default">
      <property_group type="system" name="keymap">
        <propval type="astring" name="layout" value="US-English"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/console-login">
    <instance enabled="true" name="default">
      <property_group type="application" name="ttymon">
        <propval type="astring" name="terminal_type" value="vt100"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="network/physical">
    <instance enabled="true" name="default">
      <property_group type="application" name="netcfg">
        <propval type="astring" name="active_ncp" value="DefaultFixed"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/name-service/switch">
    <property_group type="application" name="config">
      <propval type="astring" name="default" value="files"/>
      <propval type="astring" name="host" value="files dns"/>
      <propval type="astring" name="printer" value="user files"/>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="system/name-service/cache">
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="network/dns/client">
    <property_group type="application" name="config">
      <property type="net_address" name="nameserver">
        <net_address_list>
          <value_node value="192.168.2.1"/>
        </net_address_list>
      </property>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
</service_bundle>
::::::::::::::
mars.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv4_interface">
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/>
        <propval type="astring" name="name" value="net0/v4"/>
        <propval type="net_address_v4" name="default_route" value="192.168.2.1"/>
      </property_group>
      <property_group type="application" name="install_ipv6_interface">
        <propval type="astring" name="stateful" value="yes"/>
        <propval type="astring" name="stateless" value="yes"/>
        <propval type="astring" name="address_type" value="addrconf"/>
        <propval type="astring" name="name" value="net0/v6"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/identity">
    <instance enabled="true" name="node">
      <property_group type="application" name="config">
        <propval type="astring" name="nodename" value="mars"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
::::::::::::::
user.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/config-user">
    <instance enabled="true" name="default">
      <property_group type="application" name="root_account">
        <propval type="astring" name="login" value="root"/>
        <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
        <propval type="astring" name="type" value="role"/>
      </property_group>
      <property_group type="application" name="user_account">
        <propval type="astring" name="login" value="stefan"/>
        <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
        <propval type="astring" name="type" value="normal"/>
        <propval type="astring" name="description" value="Stefan Hinker"/>
        <propval type="count" name="uid" value="12345"/>
        <propval type="count" name="gid" value="10"/>
        <propval type="astring" name="shell" value="/usr/bin/bash"/>
        <propval type="astring" name="roles" value="root"/>
        <propval type="astring" name="profiles" value="System Administrator"/>
        <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \
-c ipv4=192.168.2.100
root@ai-server:~# installadm list -p
Service Name Profile
------------ -------
x86-fcs      general.xml
             mars.xml
             user.xml
root@ai-server:~# installadm list -n x86-fcs -p
Profile     Criteria
-------     --------
general.xml None
mars.xml    ipv4 = 192.168.2.100
user.xml    None

Here's the idea behind these files:

- "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same.
- "user.xml" only contains user definitions, that is, a root password and a primary user.  Both of these profiles will be valid for all clients (for now).
- "mars.xml" defines network settings for an individual client.  This profile is associated with an IP address.  For this to work, I'll have to tweak the DHCP settings in the next step:

root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs
root@ai-server:~# vi /etc/inet/dhcpd4.conf
root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf
host 080027AA3DB1 {
  hardware ethernet 08:00:27:AA:3D:B1;
  fixed-address 192.168.2.100;
  filename "01080027AA3DB1";
}

This completes the client preparations.  I manually added the IP address for mars to /etc/inet/dhcpd4.conf; this is needed for the "mars.xml" profile.  Disabling arbitrary DHCP replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)
Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot; "installadm create-client" creates a new boot menu for every client, identified by the client's MAC address.  The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead.

root@ai-server:~# cd /etc/netboot
root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
root@ai-server:~# vi menu.lst.01080027AA3DB1
root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
1,2c1,2
< default=1
< timeout=10
---
> default=0
> timeout=30
root@ai-server:~# more menu.lst.01080027AA3DB1
default=1
timeout=10
min_mem64=0

title Oracle Solaris 11 11/11 Text Installer and command line
    kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555
    module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

title Oracle Solaris 11 11/11 Automated Install
    kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text
    module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

With "default=1", the client boots straight into the second entry, the automated install, and the shorter timeout keeps the wait down.  Now just boot the client off the network using PXE boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready.  Here is a list that I recommend using for 10G SOA clusters.

Test cases for each component - Oracle Application Server 10G

General Application Server test cases

This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.

Test Case 1: Check if you can see the AS instances in the console
Implementation:
1. Log on to the AS Console and check whether you can see all the nodes in your AS cluster.  You should be able to see all the Oracle AS instances that are part of the cluster.  This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
Result:
All the instances in the AS cluster should be listed in the EM console.  If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly:
$ORACLE_HOME/opmn/logs/opmn.log
$ORACLE_HOME/opmn/logs/opmn.dbg
If OPMN did not join the cluster properly, check the opmn.xml file to make sure the discovery multicast address and port are correct (see the opmn documentation).  Restart the whole instance using opmnctl stopall followed by opmnctl startall.  Log on to the AS console to see if the instance is listed as part of the cluster.

Test Case 2: Check if you can start/stop each component
Implementation:
1. Check each OC4J component on each AS instance.
2. Start and stop each and every component through the AS console.
3. Do that for each and every instance.
Result:
Each component should start and stop through the AS console.  You can also verify that a component started by checking opmnctl status after logging onto each box associated with the cluster.

Test Case 3: Add/modify a datasource entry through the AS console on a remote AS instance (not on the instance where EM is physically running)
Implementation:
1. Pick an OC4J instance.
2. Create a new data source through the AS console.
3. Modify an existing data source or connection pool (optional).
Result:
Open $ORACLE_HOME/j2ee/<oc4j_name>/config/data-sources.xml to see if the new (and/or modified) connection details and data source exist.  If they do, the AS console has successfully updated a remote file and the MBeans are communicating correctly.

Test Case 4: Start and stop AS instances using the opmnctl @cluster command
Implementation:
1. Go to $ORACLE_HOME/opmn/bin and use opmnctl @cluster to start and stop the AS instances.
Result:
Use opmnctl @cluster status to check the start and stop statuses.
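For Test Case 4, the whole sequence might look like this (a sketch; opmnctl lives in $ORACLE_HOME/opmn/bin on every instance):

$ORACLE_HOME/opmn/bin/opmnctl @cluster stopall
$ORACLE_HOME/opmn/bin/opmnctl @cluster startall
$ORACLE_HOME/opmn/bin/opmnctl @cluster status

The status output should show all processes on all instances as Alive.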
HTTP server test cases

This section deals with use cases to test HTTP server failover scenarios.  In these examples the HTTP server will be talking to the BPEL console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole.

Test Case 1: Shut down one of the HTTP servers while accessing the BPEL console and see the request routed to the second HTTP server in the cluster
Implementation:
1. Access the BPELConsole.
2. Check $ORACLE_HOME/Apache/Apache/logs/access_log for the timestamp and the URL that was accessed by the user.  The entry would look like this:
1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15
3. After you have figured out which HTTP server this is running on, shut down this HTTP server using opmnctl stopproc - this is a graceful shutdown.
4. Access the BPELConsole again.  (Note that you should have a load balancer in front of the HTTP servers and have configured the Apache virtual host; see the EDG for steps.)
5. Check the access_log of the surviving HTTP server for the timestamp and the URL accessed, as above.
Result:
Even though you shut down one HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL console, and you are able to access the console.  By checking the access log file you can confirm that the request is being picked up by the surviving node.

Test Case 2: Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers, so that the LBR routes the request to the surviving HTTP node.  This simulates a network failure.

Test Case 3: In Test Case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash
Implementation:
1. Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down.
2. On Linux, use kill -9 <PID> to kill the HTTP server.
3. Access the BPEL console.
Result:
As you shut down the HTTP server, OPMN will restart it.  The restart may be so quick that the LBR may still route requests to the same server.  One way to check that the HTTP server restarted is to compare the new PID and the timestamps in the access log for the BPEL console.
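Put together, Test Case 3 boils down to something like the following (a sketch; <PID> is whatever opmnctl reports for the HTTP server process):

$ORACLE_HOME/opmn/bin/opmnctl status -l   # note the PID of the HTTP_Server process
kill -9 <PID>                             # simulate a crash; OPMN should respawn the process
$ORACLE_HOME/opmn/bin/opmnctl status -l   # the HTTP_Server line now shows a new PID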
BPEL test cases

This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment, and testing related to BPEL failover.

Test Case 1: Verify that jGroups has initialized correctly
There is no real testing in this use case, just a visual verification, by looking at log files, that jGroups has initialized correctly.  Check the opmn log for the BPEL container on all nodes at $ORACLE_HOME/opmn/logs/<group name>~<container name>~<group name>~1.log.  This logfile will contain jGroups-related information during startup and steady-state operation.  Soon after startup you should find log entries for UDP or TCP.

Example jGroups log entries for UDP:

Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
INFO: sockets will use interface 144.25.142.172
Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
INFO: socket information:
local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
-------------------------------------------------------
GMS: address is 144.25.142.172:1127
-------------------------------------------------------

Example jGroups log entries for TCP:

Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
INFO: server socket created on 144.25.142.172:7900
Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
-------------------------------------------------------
GMS: address is 144.25.142.172:7900
-------------------------------------------------------

In the log below, "server socket created on" indicates that the TCP socket is established on the node's own address and port, while "created socket to" shows that the second node has connected to the first node, matching the logfile above by IP address and port:

Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
INFO: server socket created on 144.25.142.173:7901
Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
-------------------------------------------------------
GMS: address is 144.25.142.173:7901
-------------------------------------------------------
Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
INFO: created socket to 144.25.142.172:7900

Result:
By reviewing the log files, you can confirm whether BPEL clustering at the jGroups level is working and the jGroups channel is communicating.

Test Case 2: Test connectivity between BPEL nodes
Implementation:
Test connections between the different cluster nodes using ping, telnet, and traceroute.  The presence of firewalls and the number of hops between cluster nodes can affect performance, as firewalls have a tendency to take down connections after some time or simply block them.  Also reference Metalink Note 413783.1, "How to Test Whether Multicast is Enabled on the Network."
Result:
Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.
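Using the addresses and the jGroups TCP port from the log excerpts above, the Test Case 2 checks could look like this (a sketch):

ping 144.25.142.173            # basic reachability between the nodes
traceroute 144.25.142.173      # note the hop count; firewalls in between may drop idle connections
telnet 144.25.142.172 7900     # the jGroups TCP server socket from the log above

If the telnet connection is refused or times out, jGroups traffic is most likely being blocked between the nodes.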
Test Case 3: Test deployment of a BPEL suitcase to one BPEL node
Implementation:
1. Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance, using ant, JDeveloper, or the BPEL console.
2. Log on to the second BPEL console to check that the BPEL suitcase has been deployed there as well.
Result:
If jGroups has been configured and is communicating correctly, BPEL clustering allows you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment.  The second BPEL instance will go to the DB and pick up the new deployment after receiving the notification.  The result is that the new deployment is "deployed" to each node, even though it was only deployed to a single BPEL instance in the BPEL cluster.

Test Case 4: Test that the BPEL server fails over and all asynch processes are picked up by the secondary BPEL instance
Implementation:
1. Deploy two asynch processes:
   - a ParentAsynch process, which calls a ChildAsynch process with a variable telling it how many times to loop or how many seconds to sleep;
   - a ChildAsynch process that loops, sleeps, or has an onAlarm.
2. Make sure that the processes are deployed to both servers.
3. Shut down one BPEL server.
4. On the active BPEL server, call ParentAsynch a few times (use the load generation page).
5. When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one.  Wait until this BPEL instance has shut down fully before starting up the second one.
6. Log on to the BPEL console and verify that the instances were picked up by the second BPEL node and completed.
Result:
The BPEL instances will fail over to the secondary node and complete the flow.

ESB test cases

This section covers the use cases involved in testing an ESB cluster.  For this section, please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.

    Read the article

  • Unhandled Exception: C# RESTful Webservice

    - by Debby
    Hi, I am trying a simple C# RESTful webservice example.  I have the service running.  When I create a console client to test the webservice, I get the following exception:

Unhandled Exception: System.ServiceModel.CommunicationException: Internal Server Error

Server stack trace:
   at System.ServiceModel.Dispatcher.WebFaultClientMessageInspector.AfterReceiveReply(Message& reply, Object correlationState)
   at System.ServiceModel.Dispatcher.ImmutableClientRuntime.AfterReceiveReply(ProxyRpc& rpc)
   at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at WebServiceClient.IService.GetData(String Data)
   at TestClient.Program.Main() in C:\My Documents\Visual Studio 2008\Projects\WebServiceTesting\WebServiceClient\WebServiceClient\Program.cs:line 38

Does anyone know why I am getting this unhandled exception, and what can be done?

    Read the article

  • Spark view engine and ASP.NET MVC 2 strongly Typed Html Helpers

    - by dekko
    Hi. I try to use HtmlHelper.TextBoxFor with the Spark view engine, but the view crashes with the exception "Dynamic view compilation failed. 'System.Web.Mvc.HtmlHelper' does not contain a definition for 'TextBoxFor' and no extension method 'TextBoxFor' accepting a first argument of type 'System.Web.Mvc.HtmlHelper' could be found (are you missing a using directive or an assembly reference?)".

This is my _global.spark:

<use namespace="System"/>
<use namespace="System.Linq"/>
<use namespace="System.Text" />
<use namespace="System.Web.Mvc"/>
<use namespace="System.Web.Mvc.Html"/>
<use namespace="System.Web.Routing"/>
<use namespace="System.Linq.Expressions" />
<use namespace="MyModels" />

In the Spark view I'm using:

${Html.TextBoxFor(m => m.UserName)}

    Read the article

  • How to deserialize "<MyType><StartDate>01/01/2000</StartDate></MyType>"

    - by afin
    How to deserialize "<MyType><StartDate>01/01/2000</StartDate></MyType>"?  Below is the MyType definition:

[Serializable]
public class MyType
{
    DateTime _StartDate;

    public DateTime StartDate
    {
        set { _StartDate = value; }
        get { return _StartDate; }
    }
}

I got the following error while deserializing:

{"The string '01/01/2000' is not a valid AllXsd value."}
[System.FormatException]: {"The string '01/01/2000' is not a valid AllXsd value."}
    Data: {System.Collections.ListDictionaryInternal}
    HelpLink: null
    InnerException: null
    Message: "The string '01/01/2000' is not a valid AllXsd value."
    Source: "System.Xml"
    StackTrace:
        at System.Xml.Schema.XsdDateTime..ctor(String text, XsdDateTimeFlags kinds)
        at System.Xml.XmlConvert.ToDateTime(String s, XmlDateTimeSerializationMode dateTimeOption)
        at System.Xml.Serialization.XmlCustomFormatter.ToDateTime(String value)
        at System.Xml.Serialization.XmlSerializationReader.ToDateTime(String value)
        at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMyType.Read2_MyType(Boolean isNullable, Boolean checkType)
        at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMyType.Read3_MyType()
    TargetSite: {Void .ctor(System.String, System.Xml.Schema.XsdDateTimeFlags)}

    Read the article

  • How to link paid app user account to the system ?

    - by user164589
    Hi guys, I have an issue related to publishing a paid app to the Android Market.  (My application is an internet-connection-based app.)  Once the app is on the Android Market, can a user who bought it pass it on to anyone else?  How secure is the .apk file?  Also, what is the payment tool of the Android Market?  My main point is choosing the best way to tie a paying user to our system.  Actually, I don't know how to link a paid user's account to my system - by email address, or by a unique device ID?  Which is the better way?  Can you advise on this part?  I really appreciate any help.  Thanks in advance.

    Read the article

< Previous Page | 320 321 322 323 324 325 326 327 328 329 330 331  | Next Page >