Search Results

Search found 7645 results on 306 pages for 'report matrix'.

  • Managing BES Software Configurations

    - by DaveJohnston
    Hi, I am having problems with OTA deployment of a bespoke application that we have written. I have read loads of threads elsewhere and got mixed help, but for my particular case none of it has really helped. So I thought I would explain my exact situation and try to get some help here.

    I am running BES version 4.1.5 (Bundle 79) for Microsoft Exchange. The application we have written is split into 5 modules, which we control, and another 4 modules which are 3rd party libraries that we require. So for our modules the version numbers are regularly changing, but for the others they are pretty much always going to remain the same. We have an alx file set up that identifies all of the files required, and in fact I am able to create a software configuration and deploy the application with no problems.

    What I am trying to do, however, is maintain multiple versions of our application on the BES and be able to select which version I want to deploy to each user. I have tried this a number of ways (as I said, I have read lots of other threads with solutions to this problem) but each seems to come with its own problem.

    First of all I tried just creating different configurations for each version of the application, but because they each had the same application ID, the BES informed me that I couldn't do this.

    I read somewhere that the solution was to create a second shared folder (e.g. \Program Files\Common Files\RIM) and add the apploader stuff and the new version of the app to this folder. I could then create a second software configuration that would have the same application ID. The result of this seemed promising to start with. When I changed the config that was assigned to a user, the new version was pushed out fine. But afterwards the BES reported that the device state was invalid, which meant I couldn't push anything else until I reactivated the device. I guess this is because the first config was never set to disallowed, so the old version wasn't removed and the device essentially reported that it had multiple versions of the same application installed.

    The next suggestion I got was to change the application ID for each version, e.g. to include the version number. This meant that each version of the application could be included in a single configuration and I could set one to disallowed and the other to required. Initially this worked and the first version was deployed. But when I switched (i.e. the old version became disallowed and the new version required) the BES reported "upgrade required" and removed the old version. The device restarted and the old version was gone, but the new version was not pushed out. I checked the BES and it still said "Upgrade Required". I checked the log files and found:

        [40000] (11/12 09:50:27.397):{0xEB8} {[email protected], PIN=1234, UserId=2}SCS::PollDBQueueNewRequests - Queuing POLL_FOR_MISSING_APPS request
        [40000] (11/12 09:50:28.241):{0xE9C} RequestHandler::PollForMissingApps: Starting Poll For Missing Apps.
        [40304] (11/12 09:50:28.241):{0xE90} WorkerThreadPool:: ThreadProc(): Thread released with empty queue
        [40000] (11/12 09:50:28.241):{0xE9C} SCS::RemoveAppDeliveryRequests - No App Delivery Requests purged for User id 2
        [30000] (11/12 09:50:28.960):{0xE9C} Discard duplicate module group "name" on device
        [30000] (11/12 09:50:28.960):{0xE9C} Discard duplicate module group "name" on device
        [40000] (11/12 09:50:29.163):{0xE9C} RequestHandler::PollForMissingApps: Completed Poll For Missing Apps, elapsed time 0.922 seconds.

    (You will notice I have removed actual names and email addresses etc. for privacy reasons. But one question: where does the name of the module group come from? In my case it is close to the application ID, but it doesn't include the version number that I added at the end in order to get it to work. Is that information embedded in a COD file or something?)

    So it is reporting a duplicate module group on the device? What does this mean? I checked the device properties (as reported on the BES) and it confirms that the modules with the old version numbers are still present on the device. So the application has been removed but not the modules? I checked the device and the modules are gone, so it is just the BES reporting that they are still there? I checked the database and it has the modules in question in the SyncDeviceMgmt table. If I delete these from the DB, the BES changes to report "Install Required", and lo and behold the new version of the app is pushed out.

    So at the end of all that, my question is: does anyone have any other suggestions of how to handle upgrading our bespoke application OTA from the BES? Or can anyone point out something I am doing wrong in what I described above that might solve the problems I am having? I guess the real question is why the database maintains that the modules are on the device after they are removed. Thanks for any help you can provide.

  • 42 passed to TerminateProcess, sometimes GetExitCodeProcess returns 0

    - by Emil
    After I get a handle returned by CreateProcess, I call TerminateProcess, passing 42 for the process exit code. Then I use WaitForSingleObject to wait for the process to terminate, and finally I call GetExitCodeProcess. None of the function calls report errors. The child process is an infinite loop and does not terminate on its own. The problem is that sometimes GetExitCodeProcess returns 42 for the exit code (as it should) and sometimes it returns 0. Any idea why?

        #include <string>
        #include <sstream>
        #include <iostream>
        #include <assert.h>
        #include <windows.h>

        void check_call( bool result, char const * call );
        #define CHECK_CALL(call) check_call(call,#call);

        int main( int argc, char const * argv[] )
        {
            if( argc>1 )
            {
                assert( !strcmp(argv[1],"inf") );
                for(;;)
                {
                }
            }
            int err=0;
            for( int i=0; i!=200; ++i )
            {
                STARTUPINFO sinfo;
                ZeroMemory(&sinfo,sizeof(STARTUPINFO));
                sinfo.cb=sizeof(STARTUPINFO);
                PROCESS_INFORMATION pe;
                char cmd_line[32768];
                strcat(strcpy(cmd_line,argv[0])," inf");
                CHECK_CALL((CreateProcess(0,cmd_line,0,0,TRUE,0,0,0,&sinfo,&pe)!=0));
                CHECK_CALL((CloseHandle(pe.hThread)!=0));
                CHECK_CALL((TerminateProcess(pe.hProcess,42)!=0));
                CHECK_CALL((WaitForSingleObject(pe.hProcess,INFINITE)==WAIT_OBJECT_0));
                DWORD ec=0;
                CHECK_CALL((GetExitCodeProcess(pe.hProcess,&ec)!=0));
                CHECK_CALL((CloseHandle(pe.hProcess)!=0));
                err += (ec!=42);
            }
            std::cout << err;
            return 0;
        }

        std::string get_last_error_str( DWORD err )
        {
            std::ostringstream s;
            s << err;
            LPVOID lpMsgBuf=0;
            if( FormatMessageA(
                    FORMAT_MESSAGE_ALLOCATE_BUFFER|FORMAT_MESSAGE_FROM_SYSTEM|FORMAT_MESSAGE_IGNORE_INSERTS,
                    0, err, MAKELANGID(LANG_NEUTRAL,SUBLANG_DEFAULT), (LPSTR)&lpMsgBuf, 0, 0) )
            {
                assert(lpMsgBuf!=0);
                std::string msg;
                try { std::string((LPCSTR)lpMsgBuf).swap(msg); } catch( ... ) { }
                LocalFree(lpMsgBuf);
                if( !msg.empty() && msg[msg.size()-1]=='\n' ) msg.resize(msg.size()-1);
                if( !msg.empty() && msg[msg.size()-1]=='\r' ) msg.resize(msg.size()-1);
                s << ", \"" << msg << '"';
            }
            return s.str();
        }

        void check_call( bool result, char const * call )
        {
            assert(call && *call);
            if( !result )
            {
                std::cerr << call << " failed.\nGetLastError:" << get_last_error_str(GetLastError()) << std::endl;
                exit(2);
            }
        }

  • How do I solve "Two different CRTLDLLs are loaded" when using packages in C++ Builder 2010?

    - by David M
    Hi, we are trying to split up our monolithic EXE into a combination of an EXE and several packages. So far we have one package that we're trying to use, and when running the EXE, CodeGuard shows the following error on startup:

        CG Error
        Two different CRTLDLLs are loaded. CG might report false errors
        (C:\Windows\system32\CC32100MT.DLL)
        (D:\Projects\Foo\Bar.bpl)
        OK

    I read this as two different runtime libraries being loaded: one the correct one (CC32100MT.dll), and one incorrect, which is the package we're trying to use. Continuing to run the program shows odd errors, especially when casting between classes or passing a pointer to a class as a parameter in a method that crosses the EXE/DLL boundary. CodeGuard itself doesn't show any other errors at all, though. How do we solve this?

    Some more details. We've looked at as many things as we (the developer working on this and I) can collectively think of:

    - Each project is built using runtime packages. The EXE host lists Bar in its package list.
    - Each project is set to compile with the dynamic RTL. However, changing this does not solve the problem.
    - The package is linked to the EXE via its BPI file, but linking via a LIB makes no difference either.
    - The EXE and BPL are compiled with the same project settings, where the same options exist for both types of project. We think, anyway :)
    - There is only one copy of the BPL and BPI on the system: it's definitely linking to the right one.
    - Examining the EXE and BPL with Depends and TDump shows they are both using C:\Windows\system32\CC32100MT.DLL. They should both be using the one RTL.
    - Creating a new project (a plain VCL forms application) and linking to the BPL (via its BPI) works fine. Something in the process of adding all the files and LIBs that make our EXE contain the code it needs changes this, but we haven't been able to figure out what. The LIBs all either correspond to DLLs we use (flat C interface, usually looking as though they were built with MSVC) or are simple projects with lots of related files, compiled to a lib for the purpose of linking into the EXE - these correspond roughly to the areas of the program we want to split to BPLs, by the way. There don't seem to be project options for the LIB projects that would affect RTL linking, unless we've missed them.
    - I have exhaustively hunted through Depends and looked at all RTL and CC32*.dll files the EXE and every single DLL references. All are identical: rtl140.bpl and CC32100MT.DLL. Fully qualified paths show they are the same files, too. Everything should be using the one same run-time library.

    We're stumped. Absolutely stumped. We've had other problems using BPLs (they seem to be surprisingly tricky things, especially using C++) but have managed to solve them all. With this one we've had no luck at all, and we'd really appreciate any insights :)

    We're using C++Builder 2010 (as part of RAD Studio actually, but with little Delphi code apart from components).

  • Can't update scala on Gentoo

    - by xhochy
    As I wanted to test Scala 2.9.2 on my Gentoo system, I tried updating the package but ended up with this error. I can't figure out where the problem may be:

        Calculating dependencies ...... done!
        >>> Verifying ebuild manifests
        >>> Jobs: 0 of 1 complete, 1 running                Load avg: 0.23, 0.16, 0.20
        >>> Emerging (1 of 1) dev-lang/scala-2.9.2
        >>> Jobs: 0 of 1 complete, 1 running                Load avg: 0.23, 0.16, 0.20
        >>> Failed to emerge dev-lang/scala-2.9.2, Log file:
        >>>  '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'
        >>> Jobs: 0 of 1 complete, 1 running, 1 failed      Load avg: 0.23, 0.16, 0.20
        >>> Jobs: 0 of 1 complete, 1 failed                 Load avg: 0.23, 0.16, 0.20
         * Package:    dev-lang/scala-2.9.2
         * Repository: gentoo
         * Maintainer: [email protected]
         * USE:        amd64 elibc_glibc kernel_linux multilib userland_GNU
         * FEATURES:   sandbox
        !!! ERROR: Couldn't find suitable VM. Possible invalid dependency string.
        Due to jdk-with-com-sun requiring a target of 1.7 but the virtual machines constrained by virtual/jdk-1.6 and/or this package requiring virtual(s) jdk-with-com-sun
         * Unable to determine VM for building from dependencies:
        NV_DEPEND: virtual/jdk:1.6
                   java-virtuals/jdk-with-com-sun
                   !binary? ( dev-java/ant-contrib:0 )
                   app-arch/xz-utils
                   >=dev-java/java-config-2.1.9-r1
                   source? ( app-arch/zip )
                   >=dev-java/ant-core-1.7.0
                   dev-java/ant-nodeps
                   >=dev-java/javatoolkit-0.3.0-r2
                   >=dev-lang/python-2.4
         * ERROR: dev-lang/scala-2.9.2 failed (setup phase):
         *   Failed to determine VM for building.
         *
         * Call stack:
         *     ebuild.sh, line 93:  Called pkg_setup
         *     scala-2.9.2.ebuild, line 43:  Called java-pkg-2_pkg_setup
         *     java-pkg-2.eclass, line 53:  Called java-pkg_init
         *     java-utils-2.eclass, line 2187:  Called java-pkg_switch-vm
         *     java-utils-2.eclass, line 2674:  Called die
         * The specific snippet of code:
         *     die "Failed to determine VM for building."
         *
         * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`,
         * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`.
        !!! When you file a bug report, please include the following information:
        GENTOO_VM=  CLASSPATH="" JAVA_HOME=""
        JAVACFLAGS="" COMPILER=""
        and of course, the output of emerge --info
         * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'.
         * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'.
         * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2'
         * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources'
         * Messages for package dev-lang/scala-2.9.2:
         * Unable to determine VM for building from dependencies:
         * ERROR: dev-lang/scala-2.9.2 failed (setup phase):
         *   Failed to determine VM for building.
         *
         * Call stack:
         *     ebuild.sh, line 93:  Called pkg_setup
         *     scala-2.9.2.ebuild, line 43:  Called java-pkg-2_pkg_setup
         *     java-pkg-2.eclass, line 53:  Called java-pkg_init
         *     java-utils-2.eclass, line 2187:  Called java-pkg_switch-vm
         *     java-utils-2.eclass, line 2674:  Called die
         * The specific snippet of code:
         *     die "Failed to determine VM for building."
         *
         * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`,
         * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`.
         * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'.
         * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'.
         * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2'
         * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources'

    The following eix output may help:

        % eix java-virtuals/jdk-with-com-sun
        [I] java-virtuals/jdk-with-com-sun
             Available versions:  20111111 {{ELIBC="FreeBSD"}}
             Installed versions:  20111111(16:08:51 18/04/12)(ELIBC="-FreeBSD")
             Homepage:            http://www.gentoo.org
             Description:         Virtual ebuilds that require internal com.sun classes from a JDK

    Both virtual jdks, 1.6 and 1.7, are installed:

        % eix virtual/jdk
        [I] virtual/jdk
             Available versions:  (1.4) ~1.4.2-r1[1]  (1.5) 1.5.0 ~1.5.0-r3[1]  (1.6) 1.6.0 1.6.0-r1  (1.7) (~)1.7.0
             Installed versions:  1.6.0-r1(1.6)(23:22:48 10/11/12) 1.7.0(1.7)(23:21:09 10/11/12)
             Description:         Virtual for JDK
        [1] "java-overlay" /var/lib/layman/java-overlay

  • NPE when drawing TabWidget in android (only on HTC Magic?)

    - by David Hedlund
    I've received a report from a user of an app I've written that he gets a force close whenever he starts a certain activity. I have not been able to reproduce the issue on the emulator or on my HTC Hero (running 1.5), but this user, running an HTC Magic (with 1.6), is facing this error every time. What bothers me is that no single step in the stack trace actually includes any code in my app (com.filmtipset):

        01-07 00:10:26.773 I/ActivityManager(  141): Starting activity: Intent { cmp=com.filmtipset/.ViewMovie (has extras) }
        01-07 00:10:27.023 D/AndroidRuntime( 2402): Shutting down VM
        01-07 00:10:27.023 W/dalvikvm( 2402): threadid=3: thread exiting with uncaught exception (group=0x4001e170)
        01-07 00:10:27.023 E/AndroidRuntime( 2402): Uncaught handler: thread main exiting due to uncaught exception
        01-07 00:10:27.083 E/AndroidRuntime( 2402): java.lang.NullPointerException
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.widget.TabWidget.dispatchDraw(TabWidget.java:173)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.View.draw(View.java:6552)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.widget.FrameLayout.draw(FrameLayout.java:352)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewGroup.drawChild(ViewGroup.java:1531)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.View.draw(View.java:6552)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.widget.FrameLayout.draw(FrameLayout.java:352)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at com.android.internal.policy.impl.PhoneWindow$DecorView.draw(PhoneWindow.java:1883)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewRoot.draw(ViewRoot.java:1332)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewRoot.performTraversals(ViewRoot.java:1097)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.view.ViewRoot.handleMessage(ViewRoot.java:1613)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.os.Handler.dispatchMessage(Handler.java:99)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.os.Looper.loop(Looper.java:123)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at android.app.ActivityThread.main(ActivityThread.java:4320)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at java.lang.reflect.Method.invokeNative(Native Method)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at java.lang.reflect.Method.invoke(Method.java:521)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549)
        01-07 00:10:27.083 E/AndroidRuntime( 2402):     at dalvik.system.NativeStart.main(Native Method)

    Full dump here if I've missed anything of interest. I'm guessing, then, that there might be something wrong with my layout. It's quite verbose, so I'm posting it here rather than pasting the whole thing on SO. There is a tab widget, where one tab is connected to the scroll view, svFilmInfo, and one to the linear layout llComments. The tab host is populated as such:

        Drawable commentSelector = getResources().getDrawable(R.drawable.tabcomment);
        Drawable infoSelector = getResources().getDrawable(R.drawable.tabinfo);
        mTabHost = getTabHost();
        mTabHost.getTabWidget().setBackgroundColor(Color.BLACK);
        mTabHost.addTab(mTabHost.newTabSpec("tabInfo").setIndicator("Filminfo", infoSelector).setContent(R.id.svFilmInfo));
        mTabHost.addTab(mTabHost.newTabSpec("tabInfo").setIndicator("Kommentarer", commentSelector).setContent(R.id.llComments));

    Since I cannot reproduce the error myself, and since I cannot find any mention in the stack trace of what might be causing the error, I don't quite know where to start troubleshooting this. I'd appreciate any pointers.
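    One detail visible in the snippet above, worth ruling out before digging into the TabWidget internals (a hypothesis to test on an affected 1.6 device, not a confirmed fix): both calls to newTabSpec() use the same tag, "tabInfo". A minimal variation of the same code with unique tags would be:

        // Hypothetical variation of the snippet from the question: the only
        // change is a unique tag per TabSpec. Whether the duplicate tag is
        // related to the Donut-only NPE is an assumption to verify.
        Drawable commentSelector = getResources().getDrawable(R.drawable.tabcomment);
        Drawable infoSelector = getResources().getDrawable(R.drawable.tabinfo);
        mTabHost = getTabHost();
        mTabHost.getTabWidget().setBackgroundColor(Color.BLACK);
        mTabHost.addTab(mTabHost.newTabSpec("tabInfo")
                .setIndicator("Filminfo", infoSelector)
                .setContent(R.id.svFilmInfo));
        mTabHost.addTab(mTabHost.newTabSpec("tabComments") // was "tabInfo" twice
                .setIndicator("Kommentarer", commentSelector)
                .setContent(R.id.llComments));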

  • Pointers to Derived Class Objects Losing vfptr

    - by duckworthd
    To begin, I am trying to write a run-of-the-mill, simple ray tracer. In my ray tracer, I have multiple types of geometries in the world, all derived from a base class called "SceneObject". I've included the header for it here.

        /** Interface for all objects that will appear in a scene */
        class SceneObject
        {
        public:
            mat4 M, M_inv;
            Color c;

            SceneObject();
            ~SceneObject();

            /** The transformation matrix to be applied to all points of this
                object. Identity leaves the object in world frame. */
            void setMatrix(mat4 M);
            void setMatrix(MatrixStack mStack);
            void getMatrix(mat4& M);

            /** The color of the object */
            void setColor(Color c);
            void getColor(Color& c);

            /** Alter one portion of the color, leaving the rest as they were. */
            void setDiffuse(vec3 rgb);
            void setSpecular(vec3 rgb);
            void setEmission(vec3 rgb);
            void setAmbient(vec3 rgb);
            void setShininess(double s);

            /** Fills 'inter' with information regarding an intersection between
                this object and 'ray'. Ray should be in world frame. */
            virtual void intersect(Intersection& inter, Ray ray) = 0;

            /** Returns a copy of this SceneObject */
            virtual SceneObject* clone() = 0;

            /** Print information regarding this SceneObject for debugging */
            virtual void print() = 0;
        };

    As you can see, I've included a couple of virtual functions to be implemented elsewhere. In this case, I have only two derived classes - Sphere and Triangle - both of which implement the missing member functions. Finally, I have a Parser class, which is full of static methods that do the actual "ray tracing" part. Here's a couple of snippets for the relevant portions:

        void Parser::trace(Camera cam, Scene scene, string outputFile, int maxDepth)
        {
            int width = cam.getNumXPixels();
            int height = cam.getNumYPixels();
            vector<vector<vec3>> colors;
            colors.clear();
            for (int i = 0; i < width; i++)
            {
                vector<vec3> ys;
                for (int j = 0; j < height; j++)
                {
                    Intersection intrsct;
                    Ray ray;
                    cam.getRay(ray, i, j);
                    vec3 color;
                    printf("Obtaining color for Ray[%d,%d]\n", i,j);
                    getColor(color, scene, ray, maxDepth);
                    ys.push_back(color);
                }
                colors.push_back(ys);
            }
            printImage(colors, width, height, outputFile);
        }

        void Parser::getColor(vec3& color, Scene scene, Ray ray, int numBounces)
        {
            Intersection inter;
            scene.intersect(inter,ray);
            if(inter.isIntersecting()){
                Color c;
                inter.getColor(c);
                c.getAmbient(color);
            } else {
                color = vec3(0,0,0);
            }
        }

    Right now, I've forgone the true ray tracing part and instead simply return the color of the first object hit, if any. As you have no doubt noticed, the only way the computer knows that a ray has intersected an object is through Scene.intersect(), which I also include:

        void Scene::intersect(Intersection& i, Ray r)
        {
            Intersection result;
            result.setDistance(numeric_limits<double>::infinity());
            result.setIsIntersecting(false);
            double oldDist;
            result.getDistance(oldDist);

            /* Cycle through all objects, making result the closest one */
            for(int ind=0; ind<objects.size(); ind++){
                SceneObject* thisObj = objects[ind];
                Intersection betterIntersect;
                thisObj->intersect(betterIntersect, r);
                double newDist;
                betterIntersect.getDistance(newDist);
                if (newDist < oldDist){
                    result = betterIntersect;
                    oldDist = newDist;
                }
            }
            i = result;
        }

    Alright, now for the problem. I begin by creating a scene and filling it with objects outside of the Parser::trace() method. Now for some odd reason, I cast the ray for i=j=0 and everything works wonderfully. However, by the time the second ray is cast, all of the objects stored in my Scene no longer recognize their vfptr's!

    I stepped through the code with a debugger and found that the information for all the vfptr's is lost somewhere between the end of getColor() and the continuation of the loop. However, if I change the arguments of getColor() to use a Scene& instead of a Scene, then no loss occurs. What crazy voodoo is this?

  • WP7: play MP3 using Media with phonegap/Cordova

    - by Loda
    My problem: I use the Media class from Cordova. The MP3 file is only played once (the first time).

    Code. Add this code to the Cordova starter project to reproduce my problem:

        var playCounter = 0;
        function playMP3(){
            console.log("playMP3() counter " + playCounter);
            var my_media = new Media("app/www/test.mp3"); // resource buildAction == content
            my_media.play();
            playCounter++;
        }
        [...]
        <p onclick="playMP3();">Click to Play MP3</p>

    VS output:

        [...]
        GapBrowser_Navigated :: /app/www/index.html
        'UI Task' (Managed): Loaded 'System.ServiceModel.Web.dll'
        'UI Task' (Managed): Loaded 'System.ServiceModel.dll'
        Log:"onDeviceReady. You should see this message in Visual Studio's output window."
        'UI Task' (Managed): Loaded 'Microsoft.Xna.Framework.dll'
        Log:"playMP3() counter 0"
        'UI Task' (Managed): Loaded 'System.SR.dll'
        Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 1}"
        A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll
        Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 2}"
        Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 2, \"value\": 2.141}"
        Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 4}"
        Log:"playMP3() counter 1"
        A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll
        A first chance exception of type 'System.IO.IOException' occurred in mscorlib.dll
        A first chance exception of type 'System.IO.IsolatedStorage.IsolatedStorageException' occurred in mscorlib.dll
        Log:"media on status :: {\"id\": \"2de3388c-bbb6-d896-9e27-660f1402bc2a\", \"msg\": 9, \"value\": 5}"

    My config: cordova-1.6.1.js, Lumia 800, WP 7.5 (7.10.7740.16).

    Workaround (kind of): deactivate the app (turn off the screen), then reactivate the app (turn on the screen) - you get one more shot.

    Any help is welcome, as I have been blocked on this for many days and have found no useful information anywhere. Also, can you tell me if this code works on your config?

    Update: added a demo using a global var, keeping the instance alive. Result: test2.mp3 is played and can replay fine; test.mp3 is not played at all. It is the first file you play that will work.

    Code:

        function onDeviceReady() {
            document.getElementById("welcomeMsg").innerHTML += "Cordova is ready! version=" + window.device.cordova;
            console.log("onDeviceReady. You should see this message in Visual Studio's output window.");
            my_media = new Media("app/www/test.mp3");   // resource buildAction == content
            my_media2 = new Media("app/www/test2.mp3"); // resource buildAction == content
        }

        var playCounter = 0;
        var my_media = null;
        function playMP3(){
            console.log("playMP3() counter " + playCounter);
            my_media.play();
            playCounter++;
        }

        var my_media2 = null;
        function playMP32(){
            console.log("playMP32() counter " + playCounter);
            my_media2.play();
            playCounter++;
        }
        </script>
        [...]
        <p onclick="playMP3();">Click to Play MP3</p>
        <p onclick="playMP32();">Click to Play MP3 2</p>

    VS output:

        Log:"onDeviceReady. You should see this message in Visual Studio's output window."
        INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b
        Log:"playMP32() counter 0"
        INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b
        Log:"playMP32() counter 1"
        Log:"playMP3() counter 2"
        INFO: startPlayingAudio could not find mediaPlayer for b60fa266-d105-a295-a5be-fa2c6b824bc1
        A first chance exception of type 'System.ArgumentException' occurred in System.Windows.dll
        Error: El parámetro es incorrecto. [the parameter is incorrect]
        Log:"playMP32() counter 3"
        INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b

    Can anybody reproduce this? Link to bug report: https://issues.apache.org/jira/browse/CB-941

  • firefox, opera 'The connection was reset' on a few POST method calls on Windows and Ubuntu

    - by Gopalakrishnan Subramani
    My website works well with the GET method, and also with a few POST methods. Some pages with the POST method don't work; some do. For example, the login page uses POST and works fine. When I post the data on a web page, Firefox says "Connecting..." and finally reports a connection timed out error. The same behavior happens with Opera as well; however, Google Chrome works fine.

    At the server side, I use nginx 1.2.4 with HTTPS and uwsgi for the Python (Flask framework) app. I use a GeoTrust certificate. The same behavior happens on Windows 7 and Ubuntu 12.04 with Firefox. I tried Firefox in safe mode, but no luck. Set auto-detect proxy settings: no luck. Cleared all cookies: no luck.

    Can anyone help me fix this issue? I am posting my nginx config. Shame on me: I run it as root, which I know is not advised; I need to fix that soon.

        user root;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;

            ssl_session_cache shared:SSL:10m;
            ssl_session_timeout 10m;

            server {
                listen 80;
                server_name www.example.com;
                rewrite ^(.*) https://example.com$1 permanent;
            }

            server {
                listen 80;
                server_name example.com;
                rewrite ^ https://$server_name$request_uri? permanent;
            }

            server {
                listen 443;
                server_name example.com;
                keepalive_timeout 70;
                ssl on;
                ssl_certificate /root/cc.cert;
                ssl_certificate_key /root/cc.key;
                ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
                #ssl_ciphers HIGH:!aNULL:!MD5;
                ssl_ciphers RC4:HIGH:!aNULL:!MD5;
                ssl_prefer_server_ciphers on;

                location / {
                    try_files $uri @app;
                }

                location @app {
                    include uwsgi_params;
                    uwsgi_pass unix:/tmp/uwsgi.sock;
                }
            }
        }

        #mail {
        #   # See sample authentication script at:
        #   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
        #
        #   # auth_http localhost/auth.php;
        #   # pop3_capabilities "TOP" "USER";
        #   # imap_capabilities "IMAP4rev1" "UIDPLUS";
        #
        #   server {
        #       listen localhost:110;
        #       protocol pop3;
        #       proxy on;
        #   }
        #
        #   server {
        #       listen localhost:143;
        #       protocol imap;
        #       proxy on;
        #   }
        #}

  • Windows 7: Search indexing is stuck

    - by Ricket
    When I open Indexing Options, it says:

        4,317 items indexed
        Indexing in progress. Search results might not be complete during this time.

    It's stuck at 4,317, though; no more items have been indexed. Worst of all, SearchIndexer.exe is taking up 100% CPU (well, 50%, but I have a dual core CPU; it's taking up all the processing power it can). It is not causing hard drive activity, though.

    I tried clicking "Troubleshoot search and indexing" at the bottom of the Indexing Options window, but it couldn't find any problem. I've also tried the repair registry key that several websites suggest: I change HKLM\SOFTWARE\Microsoft\Windows Search\SetupCompletedSuccessfully to 0 and restart the computer, and it apparently repairs, because it flips back to 1, but the same problem continues to occur.

    It's reducing the battery life of my laptop and making it really hot, so my fans are running all the time. I've had to disable the Windows Search service. How can I fix this? Do I need to just flat-out reformat my computer?

    Update: I've tried rebuilding a couple of times. There's nothing unusual about the locations I have to index, and I don't have any downloads in progress or anything like that. I don't see any reason why it stopped, and I noticed it much too late to do a system restore. At this point, I'm hoping someone will offer up some secret answer that will fix the problem, thus the bounty.

    Another update: I tried starting the service again, just to let it try yet again. It seemed okay at first (Indexing Options showed it operating at reduced speed due to user activity, and the number of files was going up). A while later I checked, and the service had stopped. Event Viewer revealed some errors like this:

        Log Name:      Application
        Source:        Application Error
        Date:          2/1/2010 7:34:23 PM
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      ricky-win7
        Description:
        Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0
        Faulting module name: NLSData0007.dll, version: 6.1.7600.16385, time stamp: 0x4a5bda88
        Exception code: 0xc0000005
        Fault offset: 0x002141ba
        Faulting process id: 0x13a0
        Faulting application start time: 0x01caa39f2a70ec02
        Faulting application path: C:\Windows\system32\SearchIndexer.exe
        Faulting module path: C:\Windows\System32\NLSData0007.dll
        Report Id: b4f7a7ae-0f92-11df-87fc-e5d65d8794c2
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Application Error" />
            <EventID Qualifiers="0">1000</EventID>
            <Level>2</Level>
            <Task>100</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-02-02T00:34:23.000000000Z" />
            <EventRecordID>10689</EventRecordID>
            <Channel>Application</Channel>
            <Computer>ricky-win7</Computer>
            <Security />
          </System>
          <EventData>
            <Data>SearchIndexer.exe</Data>
            <Data>7.0.7600.16385</Data>
            <Data>4a5bcdd0</Data>
            <Data>NLSData0007.dll</Data>
            <Data>6.1.7600.16385</Data>
            <Data>4a5bda88</Data>
            <Data>c0000005</Data>
            <Data>002141ba</Data>
            <Data>13a0</Data>
            <Data>01caa39f2a70ec02</Data>
            <Data>C:\Windows\system32\SearchIndexer.exe</Data>
            <Data>C:\Windows\System32\NLSData0007.dll</Data>
            <Data>b4f7a7ae-0f92-11df-87fc-e5d65d8794c2</Data>
          </EventData>
        </Event>

    If you are having the same error and arrived here from a Google search, please comment or add an answer detailing your progress on this, if any...

  • Sometimes big delay when using PHP mail()

    - by Robbert Dam
    I have a website which processes orders from a Windows application. This works as follows:

    1. The user clicks "Order now" in the Windows app.
    2. The app uploads a file with POST to a PHP script.
    3. The script immediately calls the PHP mail() function (the order is not stored in a db).

    This works fine most of the time. However, sometimes a big delay occurs (several days). Customers call asking why the product has not yet been delivered. The e-mail headers of a delayed mail follow:

        Microsoft Mail Internet Headers Version 2.0
        Received: from barracuda.nkl.nl ([10.0.0.1]) by smtp.nkl.nl with Microsoft SMTPSVC(6.0.3790.3959); Wed, 26 May 2010 16:26:51 +0200
        X-ASG-Debug-ID: 1274883818-2f8800000000-X58hIK
        X-Barracuda-URL: http://10.0.0.1:8000/cgi-bin/mark.cgi
        Received: from server45.firstfind.nl (localhost [127.0.0.1]) by barracuda.nkl.nl (Spam & Virus Firewall) with ESMTP id ECAFD15776A for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST)
        Received: from server45.firstfind.nl (server45.firstfind.nl [93.94.226.76]) by barracuda.nkl.nl with ESMTP id 85bAT2AU58kkxjPb for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST)
        X-Barracuda-Envelope-From: [email protected]
        Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200
        Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200
        Date: Wed, 19 May 2010 11:47:10 +0200
        Message-Id: <[email protected]>
        To: [email protected]
        X-ASG-Orig-Subj: easyfit - ref: Hoen3443
        Subject: easyfit - ref: Hoen3443
        X-PHP-Script: www.nklseminar.nl/emailer/upload.php for 77.61.220.217
        From: [email protected]
        Content-type: text/html
        X-Virus-Scanned: by amavisd-new
        X-Barracuda-Connect: server45.firstfind.nl[93.94.226.76]
        X-Barracuda-Start-Time: 1274883820
        X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210
        X-Barracuda-Virus-Scanned: by Barracuda Spam & Virus Firewall at nkl.nl
        X-Barracuda-Spam-Score: 0.92
        X-Barracuda-Spam-Status: No, SCORE=0.92 using global scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=DATE_IN_PAST_96_XX, DATE_IN_PAST_96_XX_2, HTML_MESSAGE, MIME_HEADER_CTYPE_ONLY, MIME_HTML_ONLY, NO_REAL_NAME
        X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.2.30817
            Rule breakdown below
             pts rule name              description
            ---- ---------------------- --------------------------------------------------
            0.00 NO_REAL_NAME           From: does not include a real name
            0.01 DATE_IN_PAST_96_XX     Date: is 96 hours or more before Received: date
            0.00 MIME_HTML_ONLY         BODY: Message only has text/html MIME parts
            0.00 HTML_MESSAGE           BODY: HTML included in message
            0.86 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME headers
            2.07 DATE_IN_PAST_96_XX_2   DATE_IN_PAST_96_XX_2
        Return-Path: [email protected]
        X-OriginalArrivalTime: 26 May 2010 14:26:51.0343 (UTC) FILETIME=[7B80DDF0:01CAFCDF]

    The delay seems to occur here:

        Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200
        Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200

    I've reported this issue various times to the web hosting service that hosts my website. They say the delay does not occur in their network (impossible). But they do confirm that the e-mail is first seen in their mail server on May 26, which is 7 days after the mail was composed. The order is marked with the timestamp of the user's local PC, which also matches May 19 (so it's not a PC clock problem).

    It's also interesting to see that all delayed mails (orders placed on different days) come in at once, so I suddenly receive 14 e-mails in my mailbox from various days. Any idea where this delay may be introduced? Could there be a bug in my PHP code that causes this? (I cannot believe I could introduce a loop of 7 days in my PHP code.)

  • How to find specific value of the node in xml file

    - by user2735149
    I am making a Windows Phone 8 app based on web services. This is my xml code (the snippet is truncated in the original):

        <response>
          <timestamp>2013-10-31T08:30:56Z</timestamp>
          <resultsOffset>0</resultsOffset>
          <status>success</status>
          <resultsLimit>8</resultsLimit>
          <resultsCount>38</resultsCount>
          <headlines>
            <headlinesItem>
              <headline>City edge past Toon</headline>
              <keywords />
              <lastModified>2013-10-30T23:45:22Z</lastModified>
              <audio />
              <premium>false</premium>
              <links>
                <api>
                  <news>
                    <href>http://api.espn.com/v1/sports/news/1600444?region=GB</href>
                  </news>
                </api>
                <web>
                  <href>http://espnfc.com/uk/en/report/381799/city-edge-toon?ex_cid=espnapi_public</href>
                </web>
                <mobile>
                  <href>http://m.espn.go.com/soccer/gamecast?gameId=381799&lang=EN&ex_cid=espnapi_public</href>
                </mobile>
              </links>
              <type>snReport</type>
              <related />
              <id>1600444</id>
              <story>Alvardo Negredo and Edin Dzeko struck in extra-time to book Manchester City's place in the last eight of the Capital One Cup, while Costel Pantilimon kept a clean sheet in the 2-0 win to keep the pressure on Joe Hart.</story>
              <linkText>Newcastle 0-2 Man City</linkText>
              <images>
                <imagesItem>
                  <height>360</height>
                  <alt>Man City celebrate after Edin Dzeko scored their second extra-time goal at Newcastle.</alt>
                  <width>640</width>
                  <name>Man City celeb Edin Dzeko goal v nufc 20131030 [640x360]</name>
                  <caption>Man City celebrate after Edin Dzeko scored their second extra-time goal at Newcastle.</caption>
                  <type>inline</type>
                  <url>http://espnfc.com/design05/images/2013/1030/mancitycelebedindzekogoalvnufc20131030_640x360.jpg</url>
                </imagesItem>
              </images>

    Code behind:

        myData = XDocument.Parse(e.Result, LoadOptions.None);
        var data = myData.Descendants("headlines").FirstOrDefault();
        var data1 = from query in myData.Descendants("headlinesItem")
                    select new UpdataNews
                    {
                        News = (string)query.Element("headline").Value,
                        Desc = (string)query.Element("description"),
                        Newsurl = (string)query.Element("links").Element("mobile").Element("href"),
                        Imageurl = (string)query.Element("images").Element("imagesItem").Element("url").Value,
                    };
        lstShow.ItemsSource = data1;

    I am trying to get values from the xml tags and assign them to News, Desc, etc. Everything works fine except Imageurl, which throws a NullException. I tried the same method for Imageurl; I don't know what's going wrong. Help..

  • HLSL/XNA Ambient light texture mixed up with multi pass lighting

    - by Manu-EPITA
    I've been having some trouble lately with lighting. I found a source on Google which works pretty well in the example. However, when I try to implement it in my current project, I get some very weird bugs. The main one is that my textures are "mixed up" when I only activate the ambient light, which means that a model gets the texture of another one. I am using the same effect for every mesh of my models. I guess this could be the problem, but I don't really know how to "reset" an effect for a new model. Is it possible? Here is my shader:

        float4x4 WVP;
        float3x3 World;

        float3 Ke;
        float3 Ka;
        float3 Kd;
        float3 Ks;
        float specularPower;
        float3 globalAmbient;
        float3 lightColor;
        float3 eyePosition;
        float3 lightDirection;
        float3 lightPosition;
        float spotPower;

        texture2D Texture;
        sampler2D texSampler = sampler_state
        {
            Texture = <Texture>;
            MinFilter = anisotropic;
            MagFilter = anisotropic;
            MipFilter = linear;
            MaxAnisotropy = 16;
        };

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float2 Texture : TEXCOORD0;
            float3 Normal : NORMAL0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 Texture : TEXCOORD0;
            float3 PositionO: TEXCOORD1;
            float3 Normal : NORMAL0;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = mul(input.Position, WVP);
            output.Normal = input.Normal;
            output.PositionO = input.Position.xyz;
            output.Texture = input.Texture;
            return output;
        }

        float4 PSAmbient(VertexShaderOutput input) : COLOR0
        {
            return float4(Ka*globalAmbient + Ke,1) * tex2D(texSampler,input.Texture);
        }

        float4 PSDirectionalLight(VertexShaderOutput input) : COLOR0
        {
            // Diffuse
            float3 L = normalize(-lightDirection);
            float diffuseLight = max(dot(input.Normal,L), 0);
            float3 diffuse = Kd*lightColor*diffuseLight;

            // Specular
            float3 V = normalize(eyePosition - input.PositionO);
            float3 H = normalize(L + V);
            float specularLight = pow(max(dot(input.Normal,H),0),specularPower);
            if(diffuseLight<=0) specularLight=0;
            float3 specular = Ks * lightColor * specularLight;

            // Sum all light components
            float3 light = diffuse + specular;
            return float4(light,1) * tex2D(texSampler,input.Texture);
        }

        technique MultiPassLight
        {
            pass Ambient
            {
                VertexShader = compile vs_3_0 VertexShaderFunction();
                PixelShader = compile ps_3_0 PSAmbient();
            }
            pass Directional
            {
                PixelShader = compile ps_3_0 PSDirectionalLight();
            }
        }

    And here is how I actually apply my effects:

        public void ApplyLights(ModelMesh mesh, Matrix world, Texture2D modelTexture, Camera camera, Effect effect, GraphicsDevice graphicsDevice)
        {
            graphicsDevice.BlendState = BlendState.Opaque;
            effect.CurrentTechnique.Passes["Ambient"].Apply();
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                graphicsDevice.SetVertexBuffer(part.VertexBuffer);
                graphicsDevice.Indices = part.IndexBuffer;

                // Texturing
                graphicsDevice.BlendState = BlendState.AlphaBlend;
                if (modelTexture != null)
                {
                    effect.Parameters["Texture"].SetValue( modelTexture );
                }
                graphicsDevice.DrawIndexedPrimitives(
                    PrimitiveType.TriangleList,
                    part.VertexOffset, 0, part.NumVertices,
                    part.StartIndex, part.PrimitiveCount );

                // Applying our shader to all the mesh parts
                effect.Parameters["WVP"].SetValue( world * camera.View * camera.Projection );
                effect.Parameters["World"].SetValue(world);
                effect.Parameters["eyePosition"].SetValue( camera.Position );

                graphicsDevice.BlendState = BlendState.Additive;

                // Drawing lights
                foreach (DirectionalLight light in DirectionalLights)
                {
                    effect.Parameters["lightColor"].SetValue(light.Color.ToVector3());
                    effect.Parameters["lightDirection"].SetValue(light.Direction);
                    // Applying changes and drawing them
                    effect.CurrentTechnique.Passes["Directional"].Apply();
                    graphicsDevice.DrawIndexedPrimitives(
                        PrimitiveType.TriangleList,
                        part.VertexOffset, 0, part.NumVertices,
                        part.StartIndex, part.PrimitiveCount );
                }
            }
        }

    I am also applying this when loading the effect:

        effect.Parameters["lightColor"].SetValue(Color.White.ToVector3());
        effect.Parameters["globalAmbient"].SetValue(Color.White.ToVector3());
        effect.Parameters["Ke"].SetValue(0.0f);
        effect.Parameters["Ka"].SetValue(0.01f);
        effect.Parameters["Kd"].SetValue(1.0f);
        effect.Parameters["Ks"].SetValue(0.3f);
        effect.Parameters["specularPower"].SetValue(100);

    Thank you very much.

    UPDATE: I tried to load an effect for each model when drawing, but it doesn't seem to have changed anything. I suppose it is because XNA detects that the effect has already been loaded before and doesn't want to load a new one. Any idea why?

  • logback and third-party code writing to stdout. How to stop them getting interleaved.

    - by David Roussel
    First some background. I have a batch-type Java process run from a DOS batch script. All the Java logging goes to stdout, and the batch script redirects stdout to a file. (This is good for me because I can ECHO from the script and it gets into the log file, so I can see all the Java JVM command line args, which is great for debugging.) Note that I use the slf4j API, and for the backend I used to use log4j but recently switched to logback-classic. Although all my application code uses slf4j, I have a third-party library that does its own logging (not using a standard API) which also gets written to stdout.

    The problem is that sometimes log lines get mixed up and don't cleanly appear on separate lines. Here is an example of some messed-up output:

        2010-05-28 18:00:44.783 [thread-1        ] INFO  CreditCorrelationElementBuilderImpl - Bump parameters exist for scenario, now attempting bumping. [indexDisplayName=STANDARD_S1_v300]
        2010-05-28 18:01:43.517 [thread-1        ] INFO  CreditCorrelationElementBuilderImpl - Found adjusted point in data, now applying bump. [point=0.144040000000000]
        2010-05-28 18:01:58.642 [thread-1        ] DEBUG com.company.request.Request - Generated request for [dealName=XXX_20050225_01[5],dealType=GENERIC_XXX,correlationType=2,copulaType=1] in 73.8 s, Simon Stopwatch: [sys1.batchpricer.reqgen.gen INHERIT] total 1049 s, counter 24, max 74.1 s, min 212 ms
        2010-05-28 18:05/28/10 18:02:20.236 INFO: [ServiceEvent] SubmittedTask:BC-STRESS_04_FZBC-2010-05-21-545024448189310126-23 01:58.658 [req-writer-2b   ] INFO  .c.g.r.o.OptionalFileDocumentOutput - Writing request XML to \\filserver\dir\file1.xml - write time: 21.4 ms - Simon Stopwatch: [sys1.batchpricer.reqgen.writeinputfile INHERIT] total 905 ms, counter 24, max 109 ms, min 10.8 ms
        2010-05-28 18:02:33.626 [ResponseCallbacks-1: DriverJobSpace$TakeJobRunner$1] ERROR c.c.s.s.D.CalculatorCallback - Id:23 no deal found !!
        2010-0505/28/10 18:02:50.267 INFO: [ServiceEvent] CompletedTask:BC-STRESS_04_FZBC-2010-05-21-545024448189310126-23:Total:24

    Now, comparing back to older log files, it seems the problem didn't occur when using log4j as the logging backend, so logback must be doing something different. The problem seems to be that although PrintStream.write(byte buf[], int off, int len) is synchronized, I can see in ch.qos.logback.core.joran.spi.ConsoleTarget that System.out.write(int b) is the only write method called. So in between logback outputting each byte, the third-party library manages to write a whole string to stdout. (Not only does this cause me a problem, but it must also be a little inefficient?)

    Is there any fix to this interleaving problem other than patching the code of ConsoleTarget so it implements the other write methods? Any nice workarounds? Or should I just file a bug report?

    Here is my logback.xml:

        <configuration>
          <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
              <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%-16thread] %-5level %-35.35logger{30} - %msg%n</pattern>
            </encoder>
          </appender>
          <root level="DEBUG">
            <appender-ref ref="STDOUT" />
          </root>
        </configuration>

    I'm using logback 0.9.20 with Java 1.6.0_07.
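    To make the failure mode above concrete, here is a small standalone sketch (independent of logback and of the project above; the class name and loop counts are made up for illustration). One thread writes a line one byte at a time, the way a per-byte ConsoleTarget would, while another hands whole lines to println(). Even though PrintStream locks every individual call, the whole-line writer can slot in between the single-byte calls:

        import java.io.PrintStream;

        public class InterleaveDemo {
            public static void main(String[] args) throws InterruptedException {
                final PrintStream out = System.out;
                // Writer A: one byte per write() call, so the stream's lock is
                // released and re-acquired between every byte of the line.
                Thread byteWise = new Thread(() -> {
                    byte[] line = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n".getBytes();
                    for (int n = 0; n < 200; n++)
                        for (byte b : line)
                            out.write(b);
                });
                // Writer B: hands over each whole line in a single locked call.
                Thread buffered = new Thread(() -> {
                    for (int n = 0; n < 200; n++)
                        out.println("BBBBBBBBBBBBBBBBBBBBBBBBBBBBBB");
                });
                byteWise.start();
                buffered.start();
                byteWise.join();
                buffered.join();
                out.flush(); // runs of 'a' end up sliced apart by complete 'B' lines
            }
        }

    The sketch only demonstrates why per-byte writes are enough to let another writer interleave; whether patching ConsoleTarget to implement write(byte[], int, int), or buffering each event before writing, is the supported fix is a question for the logback maintainers.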

  • Matplotlib plotting non uniform data in 3D surface

    - by Raj Tendulkar
    I have some simple code to plot the points in 3D for Matplotlib, as below:

        from mpl_toolkits.mplot3d import axes3d
        import matplotlib.pyplot as plt
        import numpy as np
        from numpy import genfromtxt
        import csv

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')

        my_data = genfromtxt('points1.csv', delimiter=',')
        points1X = my_data[:,0]
        points1Y = my_data[:,1]
        points1Z = my_data[:,2]

        ## I remove the header of the CSV file.
        points1X = np.delete(points1X, 0)
        points1Y = np.delete(points1Y, 0)
        points1Z = np.delete(points1Z, 0)

        # Convert the arrays to 1D arrays
        points1X = np.reshape(points1X,points1X.size)
        points1Y = np.reshape(points1Y,points1Y.size)
        points1Z = np.reshape(points1Z,points1Z.size)

        my_data = genfromtxt('points2.csv', delimiter=',')
        points2X = my_data[:,0]
        points2Y = my_data[:,1]
        points2Z = my_data[:,2]

        ## I remove the header of the CSV file.
        points2X = np.delete(points2X, 0)
        points2Y = np.delete(points2Y, 0)
        points2Z = np.delete(points2Z, 0)

        # Convert the arrays to 1D arrays
        points2X = np.reshape(points2X,points2X.size)
        points2Y = np.reshape(points2Y,points2Y.size)
        points2Z = np.reshape(points2Z,points2Z.size)

        ax.plot(points1X, points1Y, points1Z, 'd', markersize=8, markerfacecolor='red', label='points1')
        ax.plot(points2X, points2Y, points2Z, 'd', markersize=8, markerfacecolor='blue', label='points2')
        plt.show()

    My problem is that I have tried to make a decent surface plot out of the data points that I have. I already tried the ax.plot_surface() function to make it look nice. For this I eliminated some points and recalculated the matrix-like input needed by this function. However, the graph I generated was far more difficult to interpret and understand. So there might be 2 possibilities: either I am not using the function correctly, or the data I am trying to plot is not good for a surface plot.

    What I was expecting was a 3D graph with an effect similar to that of a 3D pie chart: we see that one piece (the one that is pulled out) is part of another piece. I was not expecting it to be exactly like that, but some kind of effect like that. What I would like to ask is: do you think it will be possible to make such a 3D graph? Is there any better way I could express my data in three dimensions?

    Here are the 2 files (rows shown run together as in the original):

    points1.csv

        Dim1,Dim2,Dim3
        3,8,1 3,8,2 3,8,3 3,8,4 3,8,5 3,9,1 3,9,2 3,9,3 3,9,4 3,9,5 3,10,1 3,10,2 3,10,3 3,10,4 3,10,5 3,11,1 3,11,2 3,11,3 3,11,4 3,11,5 3,12,1 3,12,2 3,13,1 3,13,2 3,14,1 3,14,2 3,15,1 3,15,2 3,16,1 3,16,2 3,17,1 3,17,2 3,18,1 3,18,2
        4,8,1 4,8,2 4,8,3 4,8,4 4,8,5 4,9,1 4,9,2 4,9,3 4,9,4 4,9,5 4,10,1 4,10,2 4,10,3 4,10,4 4,10,5 4,11,1 4,11,2 4,11,3 4,11,4 4,11,5 4,12,1 4,13,1 4,14,1 4,15,1 4,16,1 4,17,1 4,18,1
        5,8,1 5,8,2 5,8,3 5,8,4 5,8,5 5,9,1 5,9,2 5,9,3 5,9,4 5,9,5 5,10,1 5,10,2 5,10,3 5,10,4 5,10,5 5,11,1 5,11,2 5,11,3 5,11,4 5,11,5 5,12,1 5,13,1 5,14,1 5,15,1 5,16,1 5,17,1 5,18,1
        6,8,1 6,8,2 6,8,3 6,8,4 6,8,5 6,9,1 6,9,2 6,9,3 6,9,4 6,9,5 6,10,1 6,11,1 6,12,1 6,13,1 6,14,1 6,15,1 6,16,1 6,17,1 6,18,1
        7,8,1 7,8,2 7,8,3 7,8,4 7,8,5 7,9,1 7,9,2 7,9,3 7,9,4 7,9,5

    points2.csv

        Dim1,Dim2,Dim3
        3,12,3 3,12,4 3,12,5 3,13,3 3,13,4 3,13,5 3,14,3 3,14,4 3,14,5 3,15,3 3,15,4 3,15,5 3,16,3 3,16,4 3,16,5 3,17,3 3,17,4 3,17,5 3,18,3 3,18,4 3,18,5
        4,12,2 4,12,3 4,12,4 4,12,5 4,13,2 4,13,3 4,13,4 4,13,5 4,14,2 4,14,3 4,14,4 4,14,5 4,15,2 4,15,3 4,15,4 4,15,5 4,16,2 4,16,3 4,16,4 4,16,5 4,17,2 4,17,3 4,17,4 4,17,5 4,18,2 4,18,3 4,18,4 4,18,5
        5,12,2 5,12,3 5,12,4 5,12,5 5,13,2 5,13,3 5,13,4 5,13,5 5,14,2 5,14,3 5,14,4 5,14,5 5,15,2 5,15,3 5,15,4 5,15,5 5,16,2 5,16,3 5,16,4 5,16,5 5,17,2 5,17,3 5,17,4 5,17,5 5,18,2 5,18,3 5,18,4 5,18,5
        6,10,2 6,10,3 6,10,4 6,10,5 6,11,2 6,11,3 6,11,4 6,11,5 6,12,2 6,12,3 6,12,4 6,12,5 6,13,2 6,13,3 6,13,4 6,13,5 6,14,2 6,14,3 6,14,4 6,14,5 6,15,2 6,15,3 6,15,4 6,15,5 6,16,2 6,16,3 6,16,4 6,16,5 6,17,2 6,17,3 6,17,4 6,17,5 6,18,2 6,18,3 6,18,4 6,18,5
        7,10,1 7,10,2 7,10,3 7,10,4 7,10,5 7,11,1 7,11,2 7,11,3 7,11,4 7,11,5 7,12,1 7,12,2 7,12,3 7,12,4 7,12,5 7,13,1 7,13,2 7,13,3 7,13,4 7,13,5 7,14,1 7,14,2 7,14,3 7,14,4 7,14,5 7,15,1 7,15,2 7,15,3 7,15,4 7,15,5 7,16,1 7,16,2 7,16,3 7,16,4 7,16,5 7,17,1 7,17,2 7,17,3 7,17,4 7,17,5 7,18,1 7,18,2 7,18,3 7,18,4 7,18,5

  • Need some suggestions on my software's architecture. [Code review]

    - by Sergio Tapia
    I'm making an open source C# library for other developers to use. My key concern is ease of use. This means using intuitive names, intuitive method usage and such. This is the first time I've done something with other people in mind, so I'm really concerned about the quality of the architecture. Plus, I wouldn't mind learning a thing or two. :)

    I have three classes: Downloader, Parser and Movie. I was thinking that it would be best to only expose the Movie class of my library and have Downloader and Parser remain hidden from invocation. Ultimately, I see my library being used like this:

        using FreeIMDB;

        public void Test()
        {
            var MyMovie = Movie.FindMovie("The Matrix");
            // Now MyMovie would have all its fields set and ready for the big show.
        }

    Can you review how I'm planning this, and point out any wrong judgement calls I've made and where I could improve? Remember, my main concern is ease of use.

    Movie.cs

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Drawing;

        namespace FreeIMDB
        {
            public class Movie
            {
                public Image Poster { get; set; }
                public string Title { get; set; }
                public DateTime ReleaseDate { get; set; }
                public string Rating { get; set; }
                public string Director { get; set; }
                public List<string> Writers { get; set; }
                public List<string> Genres { get; set; }
                public string Tagline { get; set; }
                public string Plot { get; set; }
                public List<string> Cast { get; set; }
                public string Runtime { get; set; }
                public string Country { get; set; }
                public string Language { get; set; }

                public Movie FindMovie(string Title)
                {
                    Movie film = new Movie();
                    Parser parser = Parser.FromMovieTitle(Title);
                    film.Poster = parser.Poster();
                    film.Title = parser.Title();
                    film.ReleaseDate = parser.ReleaseDate();
                    // And so on and so forth.
                    return film;
                }

                public Movie FindKnownMovie(string ID)
                {
                    Movie film = new Movie();
                    Parser parser = Parser.FromMovieID(ID);
                    film.Poster = parser.Poster();
                    film.Title = parser.Title();
                    film.ReleaseDate = parser.ReleaseDate();
                    // And so on and so forth.
                    return film;
                }
            }
        }

    Parser.cs

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using HtmlAgilityPack;

        namespace FreeIMDB
        {
            /// <summary>
            /// Provides a simple and intuitive way of searching for movies and actors on IMDB.
            /// </summary>
            class Parser
            {
                private Downloader downloader = new Downloader();
                private HtmlDocument Page;

                #region "Page Loader Events"

                private Parser()
                {
                }

                public static Parser FromMovieTitle(string MovieTitle)
                {
                    var newParser = new Parser();
                    newParser.Page = newParser.downloader.FindMovie(MovieTitle);
                    return newParser;
                }

                public static Parser FromActorName(string ActorName)
                {
                    var newParser = new Parser();
                    newParser.Page = newParser.downloader.FindActor(ActorName);
                    return newParser;
                }

                public static Parser FromMovieID(string MovieID)
                {
                    var newParser = new Parser();
                    newParser.Page = newParser.downloader.FindKnownMovie(MovieID);
                    return newParser;
                }

                public static Parser FromActorID(string ActorID)
                {
                    var newParser = new Parser();
                    newParser.Page = newParser.downloader.FindKnownActor(ActorID);
                    return newParser;
                }

                #endregion

                #region "Page Parsing Methods"

                public string Poster()
                {
                    // Logic to scrape the poster URL from the Page element of this.
                    return null;
                }

                public string Title()
                {
                    return null;
                }

                public DateTime ReleaseDate()
                {
                    return null;
                }

                #endregion
            }
        }

    Do you guys think I'm heading down a good path, or am I setting myself up for a world of hurt later on? My original thought was to separate the downloading, the parsing and the actual populating, to easily have an extensible library. Imagine if one day the website changed its HTML; I would then only have to modify the parsing class without touching the Downloader.cs or Movie.cs classes. Thanks for reading and for helping!

  • Why does this Java code not utilize all CPU cores?

    - by ReneS
    The attached simple Java code should load all available CPU cores when started with the right parameters. So for instance, you start it with

        java VMTest 8 int 0

    and it will start 8 threads that do nothing else than loop and add 2 to an integer - something that runs in registers and doesn't even allocate new memory.

    The problem we are facing now is that we do not get a 24-core machine loaded (AMD, 2 sockets with 12 cores each) when running this simple program (with 24 threads, of course). Similar things happen with 2 programs of 12 threads each, or on smaller machines. So our suspicion is that the JVM (Sun JDK 6u20 on Linux x64) does not scale well.

    Did anyone see similar things, or does anyone have the ability to run it and report whether or not it runs well on his/her machine (8 cores or more only, please)? Ideas? I tried it on Amazon EC2 with 8 cores too, but the virtual machine seems to run differently from a real box, so the loading behaves totally strangely.

        package com.test;

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;
        import java.util.concurrent.TimeUnit;

        public class VMTest
        {
            public class IntTask implements Runnable
            {
                @Override
                public void run()
                {
                    int i = 0;
                    while (true)
                    {
                        i = i + 2;
                    }
                }
            }

            public class StringTask implements Runnable
            {
                @Override
                public void run()
                {
                    int i = 0;
                    String s;
                    while (true)
                    {
                        i++;
                        s = "s" + Integer.valueOf(i);
                    }
                }
            }

            public class ArrayTask implements Runnable
            {
                private final int size;

                public ArrayTask(int size)
                {
                    this.size = size;
                }

                @Override
                public void run()
                {
                    int i = 0;
                    String[] s;
                    while (true)
                    {
                        i++;
                        s = new String[size];
                    }
                }
            }

            public void doIt(String[] args) throws InterruptedException
            {
                final String command = args[1].trim();
                ExecutorService executor = Executors.newFixedThreadPool(Integer.valueOf(args[0]));
                for (int i = 0; i < Integer.valueOf(args[0]); i++)
                {
                    Runnable runnable = null;
                    if (command.equalsIgnoreCase("int"))
                    {
                        runnable = new IntTask();
                    }
                    else if (command.equalsIgnoreCase("string"))
                    {
                        runnable = new StringTask();
                    }
                    Future<?> submit = executor.submit(runnable);
                }
                executor.awaitTermination(1, TimeUnit.HOURS);
            }

            public static void main(String[] args) throws InterruptedException
            {
                if (args.length < 3)
                {
                    System.err.println("Usage: VMTest threadCount taskDef size");
                    System.err.println("threadCount: Number 1..n");
                    System.err.println("taskDef: int string array");
                    System.err.println("size: size of memory allocation for array, ");
                    System.exit(-1);
                }
                new VMTest().doIt(args);
            }
        }
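    As a diagnostic sketch (not a fix, and separate from the code above): one way to check whether the JVM is actually keeping N runnable threads busy on N cores is to sample per-thread CPU time through ThreadMXBean while busy loops run. If scheduling works, each thread's CPU time should advance at roughly wall-clock rate; if the threads report far less, the time is being lost to scheduling rather than to the loop bodies. The thread count, sleep duration and class name here are arbitrary:

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadMXBean;

        public class CpuTimeProbe {
            public static void main(String[] args) throws InterruptedException {
                ThreadMXBean mx = ManagementFactory.getThreadMXBean();
                if (!mx.isThreadCpuTimeSupported()) {
                    System.err.println("Per-thread CPU time not supported on this JVM");
                    return;
                }
                mx.setThreadCpuTimeEnabled(true);
                int n = Runtime.getRuntime().availableProcessors();
                Thread[] workers = new Thread[n];
                for (int i = 0; i < n; i++) {
                    workers[i] = new Thread(() -> {
                        // busy loop similar in spirit to IntTask, with an
                        // interrupt check added so it can be stopped
                        int x = 0;
                        while (!Thread.currentThread().isInterrupted()) x += 2;
                    });
                    workers[i].start();
                }
                Thread.sleep(5000);
                for (Thread t : workers) {
                    long cpuMs = mx.getThreadCpuTime(t.getId()) / 1000000L;
                    System.out.println(t.getName() + ": " + cpuMs + " ms CPU of 5000 ms wall");
                    t.interrupt();
                }
            }
        }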

    Read the article

  • Only Execute Code on Certain Requests Java

    - by BillPull
    I am building a little API for class, and the teacher supplied us with a link to a tutorial that provided a simple webserver that implements Runnable. I have already written some code that will parse the arguments (or at least get me the request string) and some code that will return some simple XML. However, when certain requests like the one for the favicon are sent, I think they are messing up my code. I wrapped that in an if/else but it does not seem to be working. package server; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.net.Socket; import java.util.*; import java.io.*; import java.net.*; import parkinglots.*; public class WorkerRunnable implements Runnable{ protected Socket clientSocket = null; protected String serverText = null; public WorkerRunnable(Socket clientSocket, String serverText) { this.clientSocket = clientSocket; this.serverText = serverText; } public Boolean authenticateAPI(String key){ //Authenticate Key against Stored Keys //TODO: Create Stored Keys and Compare return true; } public void run() { try { InputStream input = clientSocket.getInputStream(); OutputStream output = clientSocket.getOutputStream(); long time = System.currentTimeMillis(); //TODO: Parse args and output different formats and Authentication //Parse URL Arguments BufferedReader in = new BufferedReader( new InputStreamReader(clientSocket.getInputStream(), "8859_1")); String request = in.readLine(); //Server gets favicon requests, so skip those and go to the args //Note: compare string contents with equals(), never with == or != System.out.println(request); if ( request != null && !request.equals("GET /favicon.ico HTTP/1.1") && !request.equals("GET / HTTP/1.1") ){ String format = "", apikey =""; System.out.println("I am Here"); String request_location = request.split(" ")[1]; String request_args = request_location.replace("/",""); request_args = request_args.replace("?",""); String[] queries = request_args.split("&"); System.out.println(queries[0]); for ( int i = 0; i < queries.length; i++ ){ if( queries[i].startsWith("format") ){ format = queries[i].split("=")[1]; } else if( queries[i].startsWith("apikey") ){ apikey = queries[i].split("=")[1]; } } if( apikey.isEmpty() ){ apikey = "None"; } if( format.isEmpty() ){ format = "xml"; } Boolean auth = authenticateAPI(apikey); if ( auth ){ if ( format.equals("xml") ){ // Retrieve XML Document String xml = LotFromDB.getParkingLotXML(); output.write((xml).getBytes()); }else{ //Retrieve JSON String json = LotFromDB.getParkingLotJSON(); output.write((json).getBytes()); } }else{ output.write(("Access Denied - User is Not Authenticated").getBytes()); } }else{ output.write(("Access Denied Must Pass API Key").getBytes()); } output.close(); input.close(); System.out.println("Request processed: " + time); } catch (IOException e) { //report exceptions e.printStackTrace(); } } } Console output I get I am Here format=json Request processed: 1333516648331 GET /favicon.ico HTTP/1.1 I am Here favicon.ico Request processed: 1333516648332 It always returns the XML as well. This is my first exposure to writing a web server and dealing with networking in Java, which frustrates me a lot in general, so any suggestions here are very appreciated.

    Read the article

  • A view interface for large object/array dumps

    - by user685107
    I want to embed in a page a detailed structure report of my model objects, like print_r() or var_export() produce (right now I'm doing this by running var_export() on get_object_vars()). But what I actually want to see is only some properties (in most cases), and at the moment I have to use Ctrl+F and seek the variable I want, instead of just staring at it right after the page completes loading. So I'm embedding buttons to show/hide large arrays etc., but thought: 'What if the thing I'm doing right now already exists?' So does it? Update: What would your ideal interface look like? First of all, dumped models fit in the first screen. All the properties can be seen at the first look at the screen (there are not many of them, around 10 per model, three models total, so it is possible). Small arrays can be shown unrolled too. Let the array size that counts as 'small' be definable. Ideally, the user can see the values of the properties without doing any click, scrolling the screen or typing something. There must be some improvements to representing the values; say, if an array is empty, show array 'My_big_array' is empty, and if a boolean variable starting with is_, has_, had_ has false as the value, make the variable (let us take is_available for example) shown as is_NOT_available in red, and if it has true as the value, show is_available in green, without any value shown. The same goes for defined constants. That would be ideal. I want to focus on this kind of switches. Krumo seems useful, but since it always closes up the variable regardless of how large it is, I cannot use it as is, but there might appear something similar on github soon :) Second update starts here: Any programmer who sees is_available = false will know what it means, no need to do more. Bringing in color indication I forgot about one thing: the 'switches', let's call them so, may be important or not. Right now I have some of them that show in green or red; this is for something global, like caching, which is shown as Caching is… ON with 'ON' written in green (and 'OFF' in red when disabled), while the words about what it is, i.e. 'Caching is… ', are written in black. And some are not so important; for example I haven't defined REVEAL_TIES is… not set with 'not set' written in gray, while the words describing what it is stay in black. And if it were set, the whole phrase would be in black, since there is nothing important: if this small utility for showing some undercover things is working, I will see some messages after it; if it isn't, the site will be working independently of its state. Dividing switches into important and unimportant ones, with a corresponding color match, should improve readability, especially for those users who are not programmers and just enabled debug mode because some guy from bugzilla said to do that — for them it would help to understand what is important and what is not.
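The is_/has_/had_ rule described above boils down to a small pure function from (name, value) to display text. Here is a minimal sketch of just that naming transformation (written in C# for illustration — the poster's context is PHP, and all names here are hypothetical); color handling would live elsewhere:

    using System;

    static class SwitchFormatter
    {
        // Implements the rule described above: a false boolean named
        // "is_available" renders as "is_NOT_available"; a true one
        // renders unchanged, with no value shown.
        public static string Render(string name, bool value)
        {
            string[] prefixes = { "is_", "has_", "had_" };
            foreach (var prefix in prefixes)
            {
                if (name.StartsWith(prefix, StringComparison.Ordinal))
                    return value ? name : prefix + "NOT_" + name.Substring(prefix.Length);
            }
            return name + " = " + value; // not a switch; fall back to plain output
        }
    }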

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day; with the launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Release to Manufacturing). We have had TFS 2010 in production for nearly 6 months and have had only minimal problems. Update 12th April 2010 – Added Scott Hanselman's tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition. Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman Same as before, we need to uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS 2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.   We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. Done - Snapshot the Hyper-V server Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server. Done - Uninstall Visual Studio Team Explorer 2010 RC You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server. Done - Uninstall TFS 2010 RC Done - Install TFS 2010 RTM Done - Configure TFS 2010 RTM Pick the Upgrade option and point it at your existing "tfs_Configuration" database to load all of the existing settings Done - Upgrade the SharePoint Extensions Upgrade Build Servers (Pending) Test the server The back-out plan, and you should always have one, is to restore the snapshot. Upgrading to Team Foundation Server 2010 – Done The first thing you need to do is turn off the TFS server, then log into the Hyper-V server and create a snapshot. Figure: Make sure you turn the server off and delete all old snapshots before you take a new one I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server. Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM'ness that is coming next. The upgrade procedure for TFS 2010 is to uninstall the old version and install the new one. Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.   Figure: Most of the heavy lifting is done by the uninstaller, but make sure you have removed any of the client bits first. Specifically Visual Studio 2010 or Team Explorer 2010.  Once the uninstall is complete (it took around 5 minutes for me) you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations. Figure: It is always recommended to install the 64 bit version of a server application where possible. 
With SharePoint 2010, Exchange 2010 and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32 bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on our TFS server along with Analysis Services and Reporting Services. But our SharePoint server lives elsewhere. Figure: This always confuses people, but in reality it makes sense. Don't install what you do not need. Every extra you install has an impact on performance. If you are integrating with SharePoint you will need to run this install on every front end server in your farm, and don't forget to upgrade your build servers and proxy servers later. Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)   It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers. Figure: Installing Microsoft .NET Framework 4 takes the most time. Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, add-ons, check-in policies and other bits as well. Configure Team Foundation Server 2010 – Done Now you can configure the server. If you have no key you will need to pick "Install a Trial Licence", but it is only £500, or free with an MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key. Figure: You can pick trial and add your key later using the TFS Server Admin. Here is where the real choices happen. We are doing an upgrade from a previous version, so I will pick Upgrade, the same as all you folks that are using the RC or TFS 2008. Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgrades them to the release.   Once you have entered your database server name you can click "List available databases" and it will show what it can upgrade. Figure: Select your database from the list and at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases. At this point the configuration wizard will load the configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options. Figure: Set the application tier account and authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.  Figure: Setting your TFS server URLs to be the remote URLs allows the reports to be accessed without using VPN. Very handy for those remote developers. Figure: Detected the existing warehouse no problem. Figure: Again we love green ticks. It gives us a warm fuzzy feeling. Figure: The username for connecting to Reporting Services should be a domain account (if you are on a domain, that is). Figure: Set up the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.   You then need to run all of your readiness checks. These checks can save your life! 
They will check all of the settings that you have entered, as well as checking that all the external services are configured and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous versions were not. First, Microsoft changed the install into two steps, install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install it will complete. If you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.   Figure: Now we have everything set up the configuration wizard can do its work.  Figure: Took a while on the "Web site" stage for some reason, but zipped through after that.  Figure: Last wee bit. TFS needs to do a little tinkering with the data to complete the upgrade. Figure: All upgraded. I am not worried about the yellow triangle as SharePoint was being a little silly. Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException) Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) [Info   @16:10:16.307] Benign exception caught as part of verify: Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException) Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify() Inner Exception Details: Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException) Exception Data Dictionary: ResponseStatusCode = InternalServerError I'll look at SharePoint after; probably the SharePoint box just needs a restart or a kick. If there is a problem with SharePoint it will come out in testing, but I will definitely be passing this on to Microsoft.   Upgrading the SharePoint connector to TFS 2010 You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this, uninstall the TFS 2010 RC from each in the same way as the server, and then install just the RTM extensions. 
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.   Figure: When you configure SharePoint it uploads all of the solutions and templates. Figure: Everything is uploaded successfully. Figure: TFS even remembered the settings from the previous installation, fantastic.   Upgrading the Team Foundation Build Servers to TFS 2010 Just like on the SharePoint servers, you will need to upgrade the build server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed. You will need to remove this as well. (Coming Soon) Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010 If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack: Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest versions of their applications with SSW Diagnostic, which will check for Service Packs and hotfixes to Visual Studio as well.   Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system. The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me. At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach. Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements. My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work. Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop. 
The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way. DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help. I won't provide the step-by-step instructions in this post as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal. Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. Then, the instructions in Part 2 won't work exactly as written, for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info: You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client (see the sketch at the end of this article). You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install and provide a command-line instruction to execute which enables required Windows features. I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error: Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) 
You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs as part of the prerequisites automatically. When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions: On the Choose the installation you want page of the installation wizard, I chose Server Farm. On the Server Type page, I chose Complete. At the end of the installation, I did not run the configuration wizard. Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint but have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have installed it. Continue reading to see how I accomplished my objective. I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again and approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed: Install SQL Server 2008 R2 to get a database engine instance installed. Run the SharePoint configuration wizard to set up the SharePoint databases. In Central Administration, create a Web application using classic mode authentication, as per a TechNet article on PowerPivot Authentication and Authorization. Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment. I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon. Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. 
However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community that will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.
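For reference, the config.xml edit mentioned above (the one that allows SharePoint setup to run on a client operating system) is documented in the MSDN development-environment article cited earlier. A sketch of the element in question follows — verify it against that article before relying on it:

    <!-- Added inside the <Configuration> element of the copied config.xml,
         per the MSDN "Setting Up the Development Environment" article -->
    <Setting Id="AllowWindowsClientInstall" Value="True"/>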

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate the following points in detail: What is static code analysis? When to use? Supported platforms Supported Visual Studio versions How to use Run Code Analysis Manually Run Code Analysis Automatically Run Code Analysis while check-in source code to TFS version control (TFSVC) Run Code Analysis as part of Team Build Understand the Code Analysis results & learn how to fix them Create your custom rule set Q & A References What is static code analysis? The Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules, which mainly target best practices in writing code. There is a large set of those rules included with Visual Studio, grouped into different categories targeting specific coding issues like security, design, interoperability, globalization and others. Static here means analyzing the source code without executing it, and this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage, for example. When to use? Running the code analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. Code analysis can also find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team. Supported platforms .NET Framework, native (C and C++) and database applications. Supported Visual Studio versions All versions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check Feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. How to use? Code analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server. Run Code Analysis Manually To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu Run Code Analysis Automatically To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's property page Run Code Analysis while check-in source code to TFS version control (TFSVC) Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. 
(This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy. Double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need, as illustrated below: Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed. Enforce C/C++ Code Analysis (/analyze): Requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in. Enforce Code Analysis for Managed Code: Requires that all managed projects run code analysis and build before they can be checked in. Check the Code analysis rule set reference on MSDN. What is a rule set? A rule set is a group of code analysis rules, like the example below where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule. Once you have configured the analysis rule, the policy will be enabled for all the team members in this project; whenever a team member checks in any source code to TFVC, the policy section will highlight the Code Analysis policy as below. TFS is a very extensible platform, so you can simply implement your own custom Code Analysis check-in policy; check this link for more details http://msdn.microsoft.com/en-us/library/dd492668.aspx but you also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx Run Code Analysis as part of Team Build With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code analysis can be enabled in the build definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; code analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report. Understand the Code Analysis results & learn how to fix them Now that you have gone through the code analysis configurations and the different ways of running it, we will go through the code analysis results: how to understand them and how to resolve them. The Code Analysis window in Visual Studio will show all the analysis results based on the rule sets you configured in the project properties. Each result item contains:
1. Check ID - the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
2. Title - the title of the warning message.
3. Description - a description of the problem or suggested fix.
4. File Name - the file name and the line of code which violate the code analysis rule set.
5. Category - the code analysis category for this error.
6. Warning/Error - depends on how you configure it in the rule set; the default is the Warning level.
7. Action - Copy: copy the warning information to the clipboard. Create Work Item: if you're connected to Team Foundation Server you can create a work item, most probably a Task or Bug, and assign it to a developer to fix a certain code analysis warning. Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning; this makes the suppression more discoverable (an example appears at the end of this post). In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project; this can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning. It does not suppress the warning globally.
Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, all you have to do is click the Check ID hyperlink and you'll be directed to MSDN online (or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it. Create a Custom Code Analysis Rule Set The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details http://msdn.microsoft.com/en-us/library/dd264974.aspx Q & A Visual Studio static code analysis vs. FxCop vs. StyleCop http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/ Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well. Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT. ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013. 
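To make the In Source suppression option above concrete, here is a minimal sketch of what the generated attribute looks like for the rule quoted earlier, CA1000 "Do not declare static members on generic types"; the class and justification text are illustrative, not from the post:

    using System.Diagnostics.CodeAnalysis;

    public class Cache<T>
    {
        // In-source suppression: CheckId and Category identify the rule;
        // only this member is exempted, not the whole project.
        [SuppressMessage("Microsoft.Design",
            "CA1000:DoNotDeclareStaticMembersOnGenericTypes",
            Justification = "Factory method is the intended entry point.")]
        public static Cache<T> Create()
        {
            return new Cache<T>();
        }
    }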
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, and the completely new authentication scheme is described in a documentation topic that is not directly linked, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data link on the main page Select Microsoft Translator from the list This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free service, which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Translations max out at 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name The client secret was auto-created and this becomes your 'password' For the redirect url provide any https url: https://microsoft.com works Give this application a description of your choice so you can identify it in the list of apps. Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys. 
To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun, eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: you need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time, unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object; a simple sketch follows below). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13"; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it at the oAuth endpoint, which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call. 
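A minimal sketch of that caching idea, assuming it lives in the same class as GetBingAuthToken above: hold the bearer token in a static field and renew it just before the 10 minute lifetime expires. The names here are illustrative, not part of the post's library:

    private static string _cachedToken;
    private static DateTime _tokenExpiresUtc = DateTime.MinValue;

    // Returns a cached bearer token, re-authenticating only when the
    // previous token is close to its 10 minute expiry.
    // Note: not thread-safe; add locking for concurrent web requests.
    public string GetBingAuthTokenCached(string clientId, string clientSecret)
    {
        if (_cachedToken != null && DateTime.UtcNow < _tokenExpiresUtc)
            return _cachedToken;

        _cachedToken = GetBingAuthToken(clientId, clientSecret);
        if (_cachedToken != null)
            _tokenExpiresUtc = DateTime.UtcNow.AddMinutes(9); // renew a bit early
        return _cachedToken;
    }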
Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header, with the value prefixed with 'Bearer '. Here's what the simple Translate API call looks like:/// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a ClientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth cookie. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:[TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de",auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!","en","de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available, and this one is the simplest but probably also the most common one, translating a single string. 
You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose from one of the values and directly embed them into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts: Translation method Source on Github | Translating with Google Translate without Google API Keys | Creating a data-driven ASP.NET Resource Provider © Rick Strahl, West Wind Technologies, 2005-2013 Posted in Localization, ASP.NET, .NET

    Read the article

  • Customize the SimpleMembership in ASP.NET MVC 4.0

    - by thangchung
    As we know, .NET 4.5 has arrived, and it comes along with a lot of interesting new features. Visual Studio 2012 was also introduced some days ago, and it made us very happy with its cool improvements. Performance when loading the code editor is very good at the moment (it opens immediately after clicking on the solution). I explored some of the cool features in these days: Json.NET integrated in ASP.NET MVC 4.0, improvements to asynchronous actions, the new lightweight theme in Visual Studio, very good support for mobile development, improvements to authentication… I reviewed them and found that in this version of .NET, Microsoft not only developed new features suggested by the community but also focused on improving the performance of existing features and components. Besides that, they also open sourced more projects, like Entity Framework, Reactive Extensions, ASP.NET Web Stack… At the moment, I feel Microsoft wants to open source more and more of their projects. Today, I am going to dive deep into the new SimpleMembership model. It is really good, because in this security model Microsoft actually focuses on development needs. As we know, in the past they introduced providers for coding security, like MembershipProvider and RoleProvider… Everyone who has ever used them knows they were hard to use, and not easy to maintain and unit test. Why? Because every time you inherit one, you need to override all the methods inside it. Some people try to abstract this by introducing more methods with the virtual keyword and implementing basic behavior, so in the subclass we only need to override the methods needed for the business. But to me, that's only a workaround. The ASP.NET and WebMatrix teams knew about it, so they built the new features based on existing components in the .NET framework. And what comes to us is SimpleMembership and SimpleRole. They implemented the Façade pattern on top of those and called it WebSecurity. In the web application, we can call WebSecurity anywhere we want, and it calls into the wrapped providers. I read a lot about this on blogs, in technical news, and on MSDN. Matthew Osborn had an excellent article about it on his blog. Jon Galloway had an article like this as well. He analyzed why the old membership providers did not fit ASP.NET MVC well and how to get over it. Those were very good to me: they showed how to do SimpleMembership and how to use it in a new ASP.NET MVC web application. But one thing those didn't tell me was how to apply it to an existing security model (that is, we already have Users and Roles in a legacy system, and need to integrate them into this system); that's the reason I will introduce it today. First, we need to create a new ASP.NET MVC application in Visual Studio 2012. We need to choose the Internet type for this web application. ASP.NET MVC actually creates all the needed components for basic membership and basic roles. A cool feature is DotNetOpenAuth coming along with it, which means we can log in using Facebook, Twitter or Windows Live if we want. But it's only for LocalDb, so we need to change it to fit the existing database model on SQL Server. 
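A connection-string sketch of that change: the name "DefaultDb" matches what the code below passes to WebSecurity.InitializeDatabaseConnection and the DbContext, while the server and database names are placeholders for your own environment:

    <!-- web.config: point "DefaultDb" at the existing SQL Server database
         instead of LocalDb; Data Source / Initial Catalog are placeholders -->
    <connectionStrings>
      <add name="DefaultDb"
           connectionString="Data Source=.;Initial Catalog=MyExistingDb;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>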
The next step is to make SimpleMembership understand which database we use and which columns it should point to for the ID and UserName. I really like this feature, because SimpleMembership only needs to know about the ID and UserName and doesn't care about the rest. I assume that we have an existing database model like the one shown, so we point to it in code. The code for it goes in InitializeSimpleMembershipAttribute, like [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]     public sealed class InitializeSimpleMembershipAttribute : ActionFilterAttribute     {         private static SimpleMembershipInitializer _initializer;         private static object _initializerLock = new object();         private static bool _isInitialized;         public override void OnActionExecuting(ActionExecutingContext filterContext)         {             // Ensure ASP.NET Simple Membership is initialized only once per app start             LazyInitializer.EnsureInitialized(ref _initializer, ref _isInitialized, ref _initializerLock);         }         private class SimpleMembershipInitializer         {             public SimpleMembershipInitializer()             {                 try                 {                     WebSecurity.InitializeDatabaseConnection("DefaultDb", "User", "Id", "UserName", autoCreateTables: true);                 }                 catch (Exception ex)                 {                     throw new InvalidOperationException("The ASP.NET Simple Membership database could not be initialized. For more information, please see http://go.microsoft.com/fwlink/?LinkId=256588", ex);                 }             }         }     } And we decorate the AccountController with it as below: [Authorize]     [InitializeSimpleMembership]     public class AccountController : Controller In this case, assume that we need to override ValidateUser to point to the existing User database table and validate against it. 
    We have to add one more class:

    public class CustomAdminMembershipProvider : SimpleMembershipProvider
    {
        // TODO: replace this string-formatted SQL with a parameterized query
        private const string SELECT_ALL_USER_SCRIPT = "select * from [dbo].[User] where UserName = '{0}'";

        private readonly IEncrypting _encryptor;
        private readonly SimpleSecurityContext _simpleSecurityContext;

        public CustomAdminMembershipProvider(SimpleSecurityContext simpleSecurityContext)
            : this(new Encryptor(), simpleSecurityContext)
        {
        }

        public CustomAdminMembershipProvider(IEncrypting encryptor, SimpleSecurityContext simpleSecurityContext)
        {
            _encryptor = encryptor;
            _simpleSecurityContext = simpleSecurityContext;
        }

        public override bool ValidateUser(string username, string password)
        {
            if (string.IsNullOrEmpty(username))
            {
                throw new ArgumentException("Argument cannot be null or empty", "username");
            }
            if (string.IsNullOrEmpty(password))
            {
                throw new ArgumentException("Argument cannot be null or empty", "password");
            }

            var hash = _encryptor.Encode(password);
            using (_simpleSecurityContext)
            {
                // Look the user up by name and compare the stored hash
                var user = _simpleSecurityContext.Users
                    .SqlQuery(string.Format(SELECT_ALL_USER_SCRIPT, username))
                    .FirstOrDefault();
                if (user == null)
                {
                    return false;
                }
                return user.Password == hash;
            }
        }
    }

    The SimpleSecurityContext looks like this:

    public class SimpleSecurityContext : DbContext
    {
        public DbSet<User> Users { get; set; }

        public SimpleSecurityContext(string connStringName)
            : base(connStringName)
        {
            this.Configuration.LazyLoadingEnabled = true;
            this.Configuration.ProxyCreationEnabled = false;
        }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
            modelBuilder.Configurations.Add(new UserMapping());
        }
    }

    And the mapping for User:

    public class UserMapping : EntityMappingBase<User>
    {
        public UserMapping()
        {
            this.Property(x => x.UserName);
            this.Property(x => x.DisplayName);
            this.Property(x => x.Password);
            this.Property(x => x.Email);
            this.ToTable("User");
        }
    }

    One important thing: you need to modify web.config to point to our customized SimpleMembership provider:

    <membership defaultProvider="AdminMemberProvider" userIsOnlineTimeWindow="15">
      <providers>
        <clear/>
        <add name="AdminMemberProvider" type="CIK.News.Web.Infras.Security.CustomAdminMembershipProvider, CIK.News.Web.Infras" />
      </providers>
    </membership>
    <roleManager enabled="false">
      <providers>
        <clear />
        <add name="AdminRoleProvider" type="CIK.News.Web.Infras.Security.AdminRoleProvider, CIK.News.Web.Infras" />
      </providers>
    </roleManager>

    The good thing here is that we do not need to modify the code in AccountController at all; we only touch SimpleMembership and SimpleRole (if needed).
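    One aside before running it: as the TODO in the provider notes, composing SQL with string.Format invites SQL injection. A parameterized sketch of the same lookup, using the DbSet.SqlQuery overload that accepts parameters, could look like this:

    // Inside ValidateUser, replacing the string.Format call:
    var user = _simpleSecurityContext.Users
        .SqlQuery("select * from [dbo].[User] where UserName = @userName",
                  new System.Data.SqlClient.SqlParameter("@userName", username))
        .FirstOrDefault();
    return user != null && user.Password == hash;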
    Now build the solution and run it. We should see the familiar login screen. If we log in with the Twitter button at the bottom of the page, we are redirected to Twitter's authentication page; after a short wait, it transfers us back to the admin screen. You can find all the source code at my MSDN Code page. I would be really happy if you left some comments below; they will help me improve the code in the future. Thanks for reading.

    Read the article

  • Setup Custom Portal & Content Enabled Domain

    - by Stefan Krantz
    Looking back over the past year, we have seen a large increase in deployments where only some parts of the WebCenter Suite infrastructure are used. The most common topology, from my personal perspective, includes WebCenter Custom Portal, WebCenter Content, and Oracle HTTP Services. Today it is very common to see installations where the whole suite is installed even though the use case only requires the custom portal and a sub-component like WebCenter Content. This post goes into detail on how to minimize deployment time and effort by laying down only the necessary managed servers. By following this method you will minimize the configuration steps, install only the required components and schemas, configure only the necessary components, and reduce the impact of architectural changes through reduced dependencies.

    Assumptions:
    - Oracle 11g Database installed
    - SYS or equivalent access to the database to set up schemas via RCU
    - A running operating system supporting JDK 7 Update 2 (check the support matrix here)
    - A good understanding of WebLogic architecture

    Binaries:
    - Oracle JDK 7 Update 2 (1.7.0_02) (Download)
    - Oracle WebLogic 10.3.6 (Download)
    - Oracle WebCenter binaries (11.1.1.6) (Download)
    - Oracle WebCenter Content binaries (11.1.1.6) (Download 1) (Download 2)
    - Oracle HTTP Services (11.1.1.6) (Download)
    - Oracle Repository Creation Utility (11.1.1.6) (Download Linux or Windows)

    Schemas:
    - MDS - Meta Data Services (WebCenter and OWSM)
    - WebCenter (WebCenter schema)
    - OCS (Oracle WebCenter Content)
    - Activities (WebCenter Activities)
    - OPSS (policy store for WebCenter)

    Installation structure:
    - [Installation Home]/Middleware
        - Oracle_WC1 (WebCenter installation)
        - Oracle_WT1 (Oracle WebTier)
        - Oracle_ECM (WebCenter Content)
        - wlserver_10.3 (WebLogic installation)
    - [Installation Home]/domains
        - webcenter (WebCenter domain)
        - instances (OHS/OPMN instance)
    - [Installation Home]/applications
    - [Installation Home]/JDK1.7.0_02

    Installation and configuration steps:

    1. Install Java and configure JAVA_HOME
    - Extract the Java installable (jdk-7u2-linux-x64) to [Installation Home]/JDK1.7.0_02
    - Add JAVA_HOME to the environment settings (JAVA_HOME=[Installation Home]/JDK1.7.0_02)
    - Update PATH in the environment settings (PATH=$JAVA_HOME/bin:$PATH)

    2. Install WebLogic Server (Middleware Home)
    - Run the installer / execute the jar file (java -jar wls1036_generic.jar)
    - Create the Middleware Home under [Installation Home]/Middleware

    3. Install WebCenter Portal (extend the Middleware Home)
    - Extract the compressed file (ofm_wc_generic_11.1.1.6.0_disk1_1of1.zip) to a temp folder
    - Execute runInstaller under the DISK1/ folder with the command (runInstaller -jreLoc $JAVA_HOME)
    - Make sure to install into ([Installation Home]/Middleware/Oracle_WC1)

    4. Install WebCenter Content (extend the Middleware Home)
    - Extract the compressed files (ofm_wcc_generic_11.1.1.6.0_disk1_1of2.zip & ofm_wcc_generic_11.1.1.6.0_disk1_2of2.zip) to the same temp folder
    - Execute runInstaller under the DISK1/ folder with the command (runInstaller -jreLoc $JAVA_HOME)
    - Make sure to install into ([Installation Home]/Middleware/Oracle_ECM)

    5. Configure the initial domain (domain name: webcenter)
    - Execute the configuration tool: [Installation Home]/Middleware/wlserver_10.3/common/bin/config
    - Select "Create a New Weblogic Domain"
    - Select the following templates (Basic Weblogic Server Domain, Oracle Enterprise Manager, Oracle WSM Policy Manager, Oracle JRF)
    - Create the new domain with name webcenter under ([Installation Home]/domains), with applications under ([Installation Home]/applications)
    - Select Production Mode and finish the configuration wizard
    - Set up the username for the startup scripts: add a new file called boot.properties to ([Installation Home]/domains/webcenter/servers/AdminServer/security) with the following lines:
      username=weblogic
      password=[password in clear text; it will be encrypted during first start]
    - Start the AdminServer in the background ([Installation Home]/domains/webcenter/bin/startWeblogic)

    6. Install and configure Oracle WebTier (OHS server)
    - Extract the compressed file (ofm_webtier_linux_11.1.1.6.0_64_disk1_1of1.zip) to a temp folder
    - Execute runInstaller under the DISK1/ folder (runInstaller)
    - Select the Install & Configure option
    - Deselect Oracle WebCache
    - Auto-configure ports

    7. Configure the schemas with RCU (Repository Creation Utility)
    - Extract the compressed file (ofm_rcu_linux_11.1.1.6.0_disk1_1of1.zip) to a temp folder
    - Execute rcu ([temp]/rcuHome/rcu)
    - Make sure the database meets the RCU requirements, in particular that PROCESSES is 200 or more. Using SQL*Plus and the sys user you can update this configuration in the database as follows:
      ALTER SYSTEM SET PROCESSES=200 SCOPE=SPFILE;
      shutdown immediate
      startup
    - Create and configure the following schemas: MDS - Meta Data Services (WebCenter and OWSM), WebCenter (WebCenter schema), OCS (Oracle WebCenter Content), Activities (WebCenter Activities), OPSS (policy store for WebCenter)
    - Remember the selected schema prefix and password (they will be used later)

    8. Configure the WebCenter Portal instance (WC_CustomPortal)
    - Execute the configuration wizard: [Installation Home]/Middleware/Oracle_WC1/common/bin/config
    - Select "Extend an Existing WebLogic domain"
    - Select the existing webcenter domain ([Installation Home]/domains/webcenter)
    - Select "Extend my domain using existing extension template", browse to ([Installation Home]/Middleware/Oracle_WC1/common/templates/applications) and select oracle.wc_custom_portal_template_11.1.1.jar
    - Select to configure (Managed Servers/Clusters/Machines)
    - On the Managed Server screen you can configure one or more WC_CustomPortal managed servers; name them WC_CustomPortal[n] (skip the numbering if not clustered)
    - In case of two WC_CustomPortal servers, create a cluster (any name) and make sure both managed servers join the new cluster
    - Create a new machine with the same name as the current machine, and make sure the AdminServer and the WC_CustomPortal[n] managed servers join it
    - Finish the configuration wizard
    - Stop the AdminServer ([Installation Home]/domains/webcenter/bin/stopWeblogic), then start it in the background ([Installation Home]/domains/webcenter/bin/startWeblogic)
    - Start WC_CustomPortal in the foreground ([Installation Home]/domains/webcenter/bin/startManagedServer WC_CustomPortal); repeat for each WC_CustomPortal instance on the host, giving the weblogic user's credentials on startup
    - Copy the security folder including boot.properties from ([Installation Home]/domains/webcenter/servers/AdminServer/) to ([Installation Home]/domains/webcenter/servers/WC_CustomPortal/); the result should be ([Installation Home]/domains/webcenter/servers/WC_CustomPortal/security/boot.properties)

    9. Configure the WebCenter Content instance (UCM_server1)
    - Execute the configuration wizard: [Installation Home]/Middleware/Oracle_ECM/common/bin/config
    - Select "Extend an Existing WebLogic domain"
    - Select "Oracle Universal Content Management - Content Server"
    - Select to configure (Managed Servers/Clusters/Machines)
    - On the Managed Server screen create only one managed server instance (UCM_server1 on port 16200; you can select any other available port)
    - Make sure the UCM_server1 managed server joins the machine
    - Finish the configuration wizard
    - Stop the AdminServer, then start it in the background (same scripts as above)
    - Start UCM_server1 in the foreground ([Installation Home]/domains/webcenter/bin/startManagedServer UCM_server1), giving the weblogic user's credentials on startup
    - Copy the security folder including boot.properties from ([Installation Home]/domains/webcenter/servers/AdminServer/) to ([Installation Home]/domains/webcenter/servers/UCM_server1/); the result should be ([Installation Home]/domains/webcenter/servers/UCM_server1/security/boot.properties)

    10. Post-configure the WebCenter Content instance for WebCenter Portal
    - Open a browser with Java applet support and navigate to http://host:port/cs
    - WARNING: the page you are presented with after authentication will only appear once for each instance
    - WARNING: make sure you set the correct storage options; also remember to consider file sharing options if you want to cluster your Content Server instance over multiple hosts
    - Set an appropriate auto number prefix
    - Update the Server Socket Port, commonly set to 4444; it is used for RIDC communication (a requirement for WebCenter Portal)
    - Update the IP Address Filter to include the IPs that are planned to access the server over RIDC; at the minimum add the IP address of the current host (this option can be updated later via EM)
    - Stop UCM_server1 ([Installation Home]/domains/webcenter/bin/stopManagedServer UCM_server1), then start it in the background ([Installation Home]/domains/webcenter/bin/startManagedServer UCM_server1)
    - Navigate to http://host:port/cs again, then to Administration/Admin Server
    - Under General Configuration, check Enable Accounts and add the following two lines to Additional Configuration Variables:
      AllowUpdateForGenwww=1
      CollectionUseCache=1
    - Save the changes, go to Component Manager, and click the "advanced component manager" link
    - Enable the following components: Folders_g, WebCenterConfigure, SiteStudio, SiteStudioExternalApplications, DBSearchContainsOpSupport
    - WARNING: make sure the following component is disabled: FrameworkFolders
    - Stop and restart UCM_server1 in the background as above
    - Navigate to http://host:port/cs once more, go to Administration/Site Studio Administration, and set the Default Project and Default WebAssets; do not forget to save and submit each page

    11. Post-configure Oracle WebTier (OHS) to include the Content Server and WebCenter Portal application contexts
    - Update the following file: [Installation Home]/domains/instances/instance1/config/OHS/ohs1/mod_wl_ohs.conf
    - For a single host, add lines from the following example: Link
    - For a clustered environment, add lines from the following template (note that the clustering in the example only applies to WC_CustomPortal): Link
    - For more information: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/contentsvr.htm#WCEDG318
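    Since the linked examples are not reproduced in this excerpt, here is a minimal sketch of what such mod_wl_ohs.conf entries typically look like. The host name and the portal context root and port are assumptions; 16200 matches the UCM_server1 port chosen above:

    # Content Server
    <Location /cs>
        SetHandler weblogic-handler
        WebLogicHost wchost.example.com
        WebLogicPort 16200
    </Location>

    # Custom portal application (the context root is application-specific)
    <Location /myportal>
        SetHandler weblogic-handler
        # For clustered WC_CustomPortal servers use instead:
        # WebLogicCluster wchost1.example.com:8888,wchost2.example.com:8888
        WebLogicHost wchost.example.com
        WebLogicPort 8888
    </Location>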
    12. Optional - Configure JOC
    - Follow the instructions at http://docs.oracle.com/cd/E23943_01/core.1111/e12037/extend_wc.htm#WCEDG264

    13. Optional (recommended) - Configure Node Manager
    - Follow the instructions at http://docs.oracle.com/cd/E23943_01/core.1111/e12037/node_manager.htm#WCEDG277

    14. Optional (mandatory for clustered environments) - Re-associate the policy store to Database or OID
    - Follow the instructions at http://docs.oracle.com/cd/E23943_01/webcenter.1111/e12405/wcadm_security_credstore.htm#CFHDEDJH

    15. Optional - Configure Coherence for Content Presenter
    - Follow the instructions in this blog post (written for PS4): https://blogs.oracle.com/ATEAM_WEBCENTER/entry/enabling_coherence_for_content_presenter

    Other recommended posts:
    - Cloning WebCenter Custom Portal: https://blogs.oracle.com/ATEAM_WEBCENTER/entry/cloning_a_webcenter_portal_managed
    - Improving WebCenter performance through caching: https://blogs.oracle.com/ATEAM_WEBCENTER/entry/improving_webcenter_performance

    Read the article

  • CodePlex Daily Summary for Friday, March 05, 2010

    CodePlex Daily Summary for Friday, March 05, 2010

    New Projects
    - .svn Folders Cleanup Tool: dotSVN Cleanup is a tool that allows you to remove the .svn folders. Just click, browse, say abracadabra ...and the magic is done. Have fun with...
    - Accord: The Accord framework creates an easy way to integrate any Dependency Injection framework into your project, while abstracting the details of your im...
    - Asp.net MVC Lab: Try asp.net mvc out
    - ASP.NET Themes management with Webforms: The provided source is an example of how to use themes in ASP.NET Webforms. This source is the "up to date" support for the article I wrote
    - B&W Port Scanner: B&W Port Scanner (formerly Net Inspector) is a fast TCP port scan utility. The main idea is support of customizable operations to be performed f...
    - BizTalk SWAT - Simple Web Activity Tracker: This is a web based version of BizTalk HAT. The concept is designed to be able to share and enable sharing of orchestration info easily. Some of th...
    - C# Linear Hash Table: A C# dictionary-like implementation of a linear hash table. It is more memory efficient than the .NET dictionary, and also almost as fast. NOTE: On...
    - DBF Import Export Wizard: DBF Import Export Wizard is a tool for anyone needing to import DBF files into SQL Server or to export SQL Server tables to a DBF file. This proje...
    - Domain as XML - Driven Development: Visual Studio Code Samples: Domain as XML - Driven Development: Visual Studio Code Samples
    - EasyDownload: This application allows you to manage downloads, handling a stack of files and several useful configurations
    - Eos2: .
    - FlightTickets: This application allows you to buy flight tickets
    - Fotofly PhotoViewer: A Silverlight control that uses the Fotofly metadata library to show the people in a photo (using Windows Live Photo Gallery People metadata) and a...
    - Fujiy source code: Source code examples
    - GameSet: This application allows you to play games with distributed users.
    - Injectivity (Dependency Injection): Injectivity is a dependency injection framework (written in C#) with a strong focus on ease of configuration and performance. Having been writt...
    - Inventory: Keep track of inputs, materials and sales
    - LoanTin.Com Source Code: LoanTin.Com - a social networking website similar to Tumblr.com, based on the source code of the Loantiner project, allowing anyone to share anything to anyo...
    - mysln: my solutions.
    - NumTextBox: A rewrite of the TextBox control; its main feature is to allow only numeric input (String, Numeric, Currency, Decimal, Float, Double, Short, Int, Long). Adapted from: http://www.codeproject.com/KB/edit/num...
    - Quick Performance Monitor: This small utility helps to monitor performance counters without using the full blown perfmon tool from Windows. It supports a number of command li...
    - Runo: Runo Research
    - Sales: This application allows you to manage a hardware store
    - ScrewWiki Form Auth Provider: Enables your ASP.NET site to use Forms Authentication to integrate with your ScrewWiki. User management is performed on a parent site, and cookie i...
    - SDS: Scientific DataSet library and tools: The SDS library makes it easy for .Net developers to read, write and share scalars, vectors, matrices and multidimensional grids which are very com...
    - ShapeSweeper: Minesweeper-like game for the Zune HD. Each hidden object has three properties to discover--location, color, and shape--and all three must be corre...
    - SilverlightExcel: an Excel file viewer in Silverlight 4: SilverlightExcel is a Silverlight application allowing you to open and view Excel files and also create graphs.
    - sPWadmin: pwAdmin is a web interface based on JSP that uses the PW-Java API to control a PW-Server.
    - Video Player control in Silverlight: A control for playing video in Silverlight 4 with chapters on the timeline control. This player will be easily skinnable and customizable. More Featur...
    - XNA Light Pre Pass Renderer: A demo/sample that shows how to write a light pre pass renderer in XNA.
    - Zimms: Collaboration site for friends, a code depot, and scratch pad

    New Releases
    - .svn Folders Cleanup Tool: dotSVN Cleanup Tool: dotSVN Cleanup Tool executable
    - Accord: Alpha: Initial build of the Accord framework.
    - AcPrac: AcPrac Ver 0.1: The first version of AcPrac. It is not fully functional, but rather a version to get the bugs out. Please report all bugs.
    - ASP.NET: ASP.NET Browser Definition Files: This download contains: ASP.NET 4 Browser Definition Files -- You can use the new ASP.NET 4 browser definition files with earlier versions of ASP....
    - B&W Port Scanner: Black`n`White Port Scanner 1.0: B&W Port Scanner 1.0 Final. Release date: 03.03.2010 Black`n`White
    - BizTalk SWAT - Simple Web Activity Tracker: BizTalk SWAT: This is a web based version of BizTalk HAT. The concept is designed to be able to share and enable sharing of orchestration info easily. It uses th...
    - BTP Tools: CSB+CUV+HCSB dict files 2010-03-04: 5. is now missing a space between the Strong's number and the Count: >CSB Translation: 圣所 7, 至圣所G39+G394 it should be: CSB Translation: 圣所 7, 至圣所G...
    - C# Linear Hash Table: Linear Hash Table: First working version of the Linear Hash Table.
    - Cassiopeia: WinTools 1.0 beta: First release
    - Composure: Caliburn-44007-trunk-vs2010.net40: This is a very simple conversion of the Caliburn trunk (rev 44007) for use in Visual Studio 2010 RC1 built against .NET40. Because the conversion w...
    - Cover Creator: CoverCreator 1.3.0: English and Polish version. Functionality to add an image to the front page. Load / save covers.
    - DBF Import Export Wizard: DBF Import Export Wizard Source Code: Version 0.1.0.3
    - DBF Import Export Wizard: DBF_Import_Export_Wizard Setup 0.1.0.3: Zip file contains Setup.exe
    - ESB Toolkit Extensions: Tellago BizTalk ESB 2.0 Toolkit Extensions v0.2: Windows Installer file that installs the library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
    - Fotofly PhotoViewer: Fotofly Photoview v0.1: The first public release. Based on a Silverlight application I have been using for over a year at www.tassography.com. This version uses Fotofly v0...
    - HPC with GPUs applied to CG: Cuda Soft Bodies simulation: CUDA source for soft bodies
    - HPC with GPUs applied to CG: Full Soft Bodies src: full source code for the soft bodies simulation
    - Injectivity (Dependency Injection): 2.8.166.2135: Release 2.8.166.2135 of the Injectivity dependency injection framework.
    - Line Counter: 1.5 (Code Outline Preview): This version contains a preview of the code outline feature; you can now view the C# code outline within Line Counter. Note that the code outline now onl...
    - Micajah Mindtouch Deki Wiki Copier: MicajahWikiCopier: You should use the following command line arguments: WikiCopier.exe "http://oldwikiwithdata.wik.is/@api/deki" "login" "password" "http://newwiki.somename.l...
    - ncontrols: Alpha 0.4.0.1: Added some examples to the Console project.
    - NumTextBox: NumTextBox initial version: A rewrite of the TextBox control that allows only numeric input (String, Numeric, Currency, Decimal, Float, Double, Short, Int, Long). This is the initial version.
    - PSCodeplex: PS CodePlex 0.2: PS CodePlex 0.2 has some breaking changes to the parameters. A few of the parameters are renamed and a few are made switch parameters. Add-Rele...
    - Quick Performance Monitor: QPerfMon First release - Version 1.0.0: The first release of the utility.
    - RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.2: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...
    - ScrewWiki Form Auth Provider: ScrewWiki Forms Authentication: Initial release
    - See.Sharper: See.Sharper.Docs-1.10.3.4: HTML documentation (including Doxygen project)
    - See.Sharper: See.Sharper-1.10.3.4: Solution (source files, debug and release binaries)
    - Solar.Generic: Solar.Generic 0.8.0.0 Beta (Revised, Renamed): Renamed project from Solar.Commons to Solar.Generic. Project solution file is now in the format of Visual ...
    - Solar.Security: Solar.Security 1.1.0.0: Performed several major refactorings of the code base. Stripped the in-memory implementation of the IConfiguration interface of transactional behavior due to...
    - sPWadmin: pwAdmin v0.7: -
    - Star System Simulator: Star System Simulator 2.3: Changes in this release: fixed several localisation issues. Features in this release: model star systems in 3D, Euler-Cromer method, improved...
    - SysI: sysi, stable and ready: This time for sure.
    - TheWhiteAmbit: TheWhiteAmbit - Demo: Two little demos demonstrating: fast realtime raytracing; generating bent normals for shading (CUDA capable GPU needed = nVidia GeForce >8x00)
    - VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 22 Beta: Build 22 (beta). New: Visual Studio 2010 RC support (VsTortoise for Visual Studio 2010 RC screenshots). New: VsTortoise integrates into Solution E...
    - WinMergeFS: WinMergeFS 0.1.42128alpha: WinMergeFS provides AuFS functionality for Windows. With WinMergeFS users can mount multiple directories into a virtual drive. Plugin based root se...
    - WSDLGenerator: WSDLGenerator 0.0.0.2: Bugs fixed; code refactored; added support for custom types
    - XNA Light Pre Pass Renderer: LightPrePassRendererXNA: Zipped source code for the light pre pass renderer made with XNA.

    Most Popular Projects
    - MetaSharp
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - LiveUpload to Facebook
    - ASP.NET
    - Microsoft SQL Server Community & Samples

    Most Active Projects
    - Umbraco CMS
    - Rawr
    - BlogEngine.NET
    - SDS: Scientific DataSet library and tools
    - MapWindow GIS
    - patterns & practices – Enterprise Library
    - jQuery Library for SharePoint Web Services
    - Ionics Isapi Rewrite Filter
    - MDT Web FrontEnd
    - DiffPlex - a .NET Diff Generator

    Read the article
