Search Results

Search found 22641 results on 906 pages for 'visual studio express 200'.

  • 2D Collision in Canvas - Balls Overlapping When Velocity is High

    - by kushsolitary
    I am doing a simple experiment in canvas using JavaScript in which some balls are thrown on the screen with some initial velocity and then bounce when they collide with each other or with the walls. I managed to do the collision with the walls perfectly, but now the problem is with the collision between balls. I am using the following code for it:

        //Check collision between two bodies
        function collides(b1, b2) {
            //Find the distance between their mid-points
            var dx = b1.x - b2.x,
                dy = b1.y - b2.y,
                dist = Math.round(Math.sqrt(dx*dx + dy*dy));

            //Check if it is a collision
            if (dist <= (b1.r + b2.r)) {
                //Calculate the angles
                var angle = Math.atan2(dy, dx),
                    sin = Math.sin(angle),
                    cos = Math.cos(angle);

                //Calculate the old velocity components
                var v1x = b1.vx * cos,
                    v2x = b2.vx * cos,
                    v1y = b1.vy * sin,
                    v2y = b2.vy * sin;

                //Calculate the new velocity components
                var vel1x = ((b1.m - b2.m) / (b1.m + b2.m)) * v1x + (2 * b2.m / (b1.m + b2.m)) * v2x,
                    vel2x = (2 * b1.m / (b1.m + b2.m)) * v1x + ((b2.m - b1.m) / (b2.m + b1.m)) * v2x,
                    vel1y = v1y,
                    vel2y = v2y;

                //Set the new velocities
                b1.vx = vel1x;
                b2.vx = vel2x;
                b1.vy = vel1y;
                b2.vy = vel2y;
            }
        }

    You can see the experiment here. The problem is that some balls overlap each other and stick together, while others rebound perfectly. I don't know what is causing this issue. Here's my ball object, in case that matters:

        function Ball() {
            //Random positions
            this.x = 50 + Math.random() * W;
            this.y = 50 + Math.random() * H;

            //Random radii
            this.r = 15 + Math.random() * 30;
            this.m = this.r;

            //Random velocity components
            this.vx = 1 + Math.random() * 4;
            this.vy = 1 + Math.random() * 4;

            //Random shade of grey color
            this.c = Math.round(Math.random() * 200);

            this.draw = function() {
                ctx.beginPath();
                ctx.fillStyle = "rgb(" + this.c + ", " + this.c + ", " + this.c + ")";
                ctx.arc(this.x, this.y, this.r, 0, Math.PI*2, false);
                ctx.fill();
                ctx.closePath();
            }
        }
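    Going by the symptoms, a likely cause is that fast-moving balls interpenetrate deeply before the collision is detected; collides() then fires again on the next frame while the pair still overlap, flipping the velocities back and forth so the balls jitter and stick. A common remedy is to push the two balls apart along the collision normal before changing their velocities. A minimal sketch, assuming the same ball objects as above (the separate helper and the even 50/50 split are illustrative, not part of the original code):

        //Push overlapping balls apart along the line between their centers,
        //so each contact is resolved only once.
        function separate(b1, b2) {
            var dx = b2.x - b1.x,
                dy = b2.y - b1.y,
                dist = Math.sqrt(dx*dx + dy*dy) || 1, //guard against division by zero
                overlap = (b1.r + b2.r) - dist;
            if (overlap > 0) {
                var nx = dx / dist,
                    ny = dy / dist;
                //Move each ball half the overlap in opposite directions
                b1.x -= nx * overlap * 0.5;
                b1.y -= ny * overlap * 0.5;
                b2.x += nx * overlap * 0.5;
                b2.y += ny * overlap * 0.5;
            }
        }

    Calling separate(b1, b2) at the top of the if (dist <= (b1.r + b2.r)) block, before the new velocities are computed, is usually enough to stop the sticking.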

    Read the article

  • Antenna Aligner Part 5: Devil is in the detail

    - by Chris George
    "The first 90% of a project takes 90% of the time and the last 10% takes the another 200%"  (excerpt from onista) Now that I have a working app (more or less), it's time to make it pretty and slick. I can't stress enough how useful it is to get other people using your software, and my simple app is no exception. I handed my iPhone to a couple of my colleagues at Red Gate and asked them to use it and give me feedback. Immediately it became apparent that the delay between the list page being shown and the list being drawn was too long, and everyone who tried the app clicked on the "Recalculate" button before it had finished. Similarly, selecting a transmitter heralded a delay before the compass page appeared with similar consequences. All users expected there to be some sort of feedback/spinny etc. to show them it is actually doing something. In a similar vein although for opposite reasons, clicking the Recalculate button did indeed recalculate the available transmitters and redraw them, but it did this too fast! One or two users commented that they didn't know if it had done anything. All of these issues resulted in similar solutions; implement a waiting spinny. Thankfully, jquery mobile has one built in, primarily used for ajax operations. Not wishing to bore you with the many many iterations I went through trying to get this to work, I'll just give you my solution! (Seriously, I was working on this most evenings for at least a week!) The final solution for the recalculate problem came in the form of the code below. $(document).on("click", ".show-page-loading-msg", function () {            var $this = $(this),                theme = $this.jqmData("theme") ||                        $.mobile.loadingMessageTheme;            $.mobile.showPageLoadingMsg(theme, "recalculating", false);            setTimeout(function ()                           { $.mobile.hidePageLoadingMsg(); }, 2000);            getLocationData();        })        .on("click", ".hide-page-loading-msg", function () {              $.mobile.hidePageLoadingMsg();        }); The spinny is activated by setting the class of a button (for example) to the 'show-page-loading-msg' class. &lt;a data-role="button" class="show-page-loading-msg"Recalculate This means the code above is fired, calling the showPageLoadingMsg on the document.mobile object. Then, after a 2 second timeout, it calls the hidePageLoadingMsg() function. Supposedly, it should show "recalculating" underneath the spinny, but I've not got that to work. I'm wondering if there is a problem with the jquery mobile implementation. Anyway, it doesn't really matter, it's the principle I'm after, and I now have spinnys!

    Read the article

  • XMLHttpRequest not working, trying to test database connection [closed]

    - by Frederick Marcoux
    I'm currently creating my own CMS for personal use, but I'm stuck on one piece of code. I'm writing an installation script, and the AJAX request that tests whether the database connection works doesn't work. Here's my JS code:

        function testDB() {
            "use strict";
            var host = document.getElementById('host').value;
            var username = document.getElementById('username').value;
            var password = document.getElementById('password').value;
            var db = document.getElementById('db_name').value;
            var xmlhttp = new XMLHttpRequest();
            var url = "test_db.php";
            var params = "host="+host+"&username="+username+"&password="+password+"&db="+db;
            xmlhttp.open("POST", url, true);
            xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", params.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.send(params);
            $('#loader').removeAttr('style');
            if (xmlhttp.responseText !== '') {
                if (xmlhttp.readyState===4 && xmlhttp.status===200) {
                    $('#next').removeAttr('disabled');
                    $('#test').attr('disabled', 'disabled');
                    $('#test').text('Connection Successful!');
                    $('#test').addClass('btn-success');
                    $('#login').addClass('success');
                    $('#login1').addClass('success');
                    $('#db').addClass('success');
                    $('#loader').attr('style', 'display: none;');
                } else {
                    $('#next').attr('disabled', 'disabled');
                    $('#test').removeClass('btn-success');
                    $('#test').removeAttr('disabled');
                    $('#test').text('Test Connection');
                    $('#login').removeClass('success');
                    $('#login1').removeClass('success');
                    $('#db').removeClass('success');
                    $('#loader').attr('style', 'display: none;');
                }
            } else {
                $('#next').attr('disabled', 'disabled');
                $('#test').removeClass('btn-success');
                $('#test').removeAttr('disabled');
                $('#test').text('Test Connection');
                $('#login').removeClass('success');
                $('#login1').removeClass('success');
                $('#db').removeClass('success');
                $('#loader').attr('style', 'display: none;');
            }
        }

    And here's my PHP code:

        <?php
        $link = mysql_connect($_POST['host'], $_POST['username'], $_POST['password']);
        if (!$link) {
            echo '';
        } else {
            if (mysql_select_db($_POST['db'])) {
                echo 'Connection Successful!';
            } else {
                echo '';
            }
        }
        mysql_close($link);
        ?>

    I don't know why it doesn't work, and I've tried jQuery's $.ajax, $.get and $.post as well, but nothing works...
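    The request is opened with async = true, so send() returns immediately and responseText is still empty when the very next line checks it. The response has to be handled in a readystatechange callback instead. A minimal sketch of the fix, registered before send() and with the jQuery bookkeeping trimmed for brevity:

        //Handle the response asynchronously; checking responseText right
        //after send() always sees an empty string.
        xmlhttp.onreadystatechange = function () {
            if (xmlhttp.readyState === 4 && xmlhttp.status === 200) {
                if (xmlhttp.responseText !== '') {
                    //connection succeeded: enable #next, mark fields as success, ...
                } else {
                    //connection failed: reset the Test button, ...
                }
                $('#loader').attr('style', 'display: none;');
            }
        };
        xmlhttp.send(params);

    As an aside, the mysql_* functions used in the PHP script are deprecated; mysqli or PDO are the usual replacements.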

    Read the article

  • JavaOne 2013: (Key) Notes of a conference – State of the Java platform and all the roadmaps by Amis

    - by JuergenKress
    Last week's JavaOne conference provided insights into the roadmap of the Java platform as well as into the current state of things in the Java community: the close relationship between Oracle and IBM concerning Java, the (continuing) lack of such a relationship with Google, the support from Microsoft for Java applications on its Azure cloud, and the vibrant developer community – with over 200 different Java User Groups in many countries of the world. There were no major surprises or stunning announcements. Java EE 7 (released in June) was celebrated, and the progress of Java 8 SE was explained, as well as the progress on Java Embedded and ME. The availability of NetBeans 7.4 RC1 and the JDK 8 Early Adopters release, as well as the open sourcing of project Avatar, were probably the only real news stories. The convergence of JavaFX and Java SE is almost complete; the upcoming alignment of Java SE Embedded and Java ME is the next big consolidation step, one that will lead to a unified platform where developers can use the same skills, development tools and APIs for EE, SE, SE Embedded and ME development. This means that anything that runs on ME will run on SE (Embedded) and EE – not necessarily the reverse, because not all SE APIs are part of the compact profile or the ME environment. However, the trimming down of the SE libraries and the increased capabilities of devices mean that a pretty rich JVM runs on many devices – such as JavaFX 8 on the Raspberry Pi. The major theme of the conference was the Internet of Things: a world of things that are smart and connected, devices like sensors, cameras and equipment, from cars, fridges and television sets to printers, security gates and kiosks, that all run Java and are all capable of sending data over local network connections or directly over the internet. The number of devices that have these capabilities is rapidly growing. This means that the number of places where Java programs can help program the behavior of devices is growing too. It also means that the volume of data generated is expanding, and that we have to find ways to harvest that data, possibly do local pre-processing (filter, aggregate) and channel the data to back-end systems. Terms typically used are edge devices (small, simple, publishing data), gateways (receiving data from many devices; collecting, consolidating and pre-processing it; and sending it onwards to the back end, typically using real-time event processing) and enterprise services – receiving the data-turned-information from the gateways to further consolidate, distribute and act upon. A cheap device like the Raspberry Pi is a perfect way for a Java developer to get started with what embedded (device) programming means and how interaction with physical input and output takes place. Roadmaps: the overall progress on Java is visualized in an overview diagram in the full article. Read the full article here. For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Insert Or update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new, but since it's the second time in a few days that I've had to explain it to different customers, I think it's worth a post. Many times developers would like to insert a new row in a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE (http://dev.mysql.com/doc/refman/5.0/en/replace.html), as well as the INSERT ... ON DUPLICATE KEY UPDATE option (http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html). With SQL Server you can do the very same thing in a more standard way, using the MERGE statement with the support of row constructors. Let's say that you have this table:

        CREATE TABLE dbo.MyTargetTable
        (
            id INT NOT NULL PRIMARY KEY IDENTITY,
            alternate_key VARCHAR(50) UNIQUE,
            col_1 INT,
            col_2 INT,
            col_3 INT,
            col_4 INT,
            col_5 INT
        )
        GO

        INSERT [dbo].[MyTargetTable] VALUES
        ('GUQNH', 10, 100, 1000, 10000, 100000),
        ('UJAHL', 20, 200, 2000, 20000, 200000),
        ('YKXVW', 30, 300, 3000, 30000, 300000),
        ('SXMOJ', 40, 400, 4000, 40000, 400000),
        ('JTPGM', 50, 500, 5000, 50000, 500000),
        ('ZITKS', 60, 600, 6000, 60000, 600000),
        ('GGEYD', 70, 700, 7000, 70000, 700000),
        ('UFXMS', 80, 800, 8000, 80000, 800000),
        ('BNGGP', 90, 900, 9000, 90000, 900000),
        ('AMUKO', 100, 1000, 10000, 100000, 1000000)
        GO

    If you want to insert or update a row, you can just do this:

        MERGE INTO
            dbo.MyTargetTable T
        USING
            (SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001))
             Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)) S
        ON
            T.alternate_key = S.alternate_key
        WHEN NOT MATCHED THEN
            INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5)
        WHEN MATCHED AND T.col_1 != S.col_1 THEN
            UPDATE SET
                T.col_1 = S.col_1,
                T.col_2 = S.col_2,
                T.col_3 = S.col_3,
                T.col_4 = S.col_4,
                T.col_5 = S.col_5
        ;

    If you want to insert/update more than one row at once, you can super-charge the idea using table-valued parameters, which you can send straight from your .NET application. Easy, powerful and effective.
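    To give a flavor of the table-valued parameter variant, here is a minimal sketch; the type and procedure names are made up for illustration and are not part of the original post:

        -- A table type matching the source columns (hypothetical name)
        CREATE TYPE dbo.MyTargetTableType AS TABLE
        (
            alternate_key VARCHAR(50),
            col_1 INT, col_2 INT, col_3 INT, col_4 INT, col_5 INT
        )
        GO

        -- The same MERGE, sourced from a TVP filled by the .NET client
        CREATE PROCEDURE dbo.UpsertMyTargetTable
            @rows dbo.MyTargetTableType READONLY
        AS
        MERGE INTO dbo.MyTargetTable T
        USING @rows S
        ON T.alternate_key = S.alternate_key
        WHEN NOT MATCHED THEN
            INSERT VALUES (S.alternate_key, S.col_1, S.col_2, S.col_3, S.col_4, S.col_5)
        WHEN MATCHED THEN
            UPDATE SET
                T.col_1 = S.col_1,
                T.col_2 = S.col_2,
                T.col_3 = S.col_3,
                T.col_4 = S.col_4,
                T.col_5 = S.col_5
        ;
        GO

    On the .NET side, the parameter is passed as a SqlParameter with SqlDbType.Structured, typically backed by a DataTable with matching columns.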

    Read the article

  • How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android, in Java. Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];
        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer
        gl.glGenTextures(1, renderTex, 0); // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        texBuffer = ByteBuffer.allocateDirect(buf.length*4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);
        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);
        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. When I render this texture fullscreen, with a lightmask (size 200x200) placed at (0, 0) and at (texW/2, texH/2), it looks like the coordinate system doesn't start at (0,0): that light overlaps the screen edge, and the images are not drawn as squares (my lightcone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
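    One thing the posted setup never touches is the viewport and projection while the FBO is bound: if they are still sized for the screen, drawing into the 1024x512 texture produces exactly this kind of offset and stretching. A hedged sketch of what that could look like around the existing bind calls (gl and gl11ep are the same objects as above; screenW and screenH are assumed variables holding the display size):

        // While the FBO is bound, make the viewport and the orthographic
        // projection match the texture dimensions, not the screen's.
        gl.glViewport(0, 0, texW, texH);
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrthof(0, texW, 0, texH, -1, 1);
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();

        // ... draw the lightmask into the texture here ...

        // After unbinding the FBO, restore the screen-sized viewport.
        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
        gl.glViewport(0, 0, screenW, screenH);

    Also note that render-to-texture appears vertically flipped relative to screen rendering in GL ES 1.x, which is another common source of "off" coordinates when the texture is later drawn fullscreen.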

    Read the article

  • change texture at runtime

    - by user1509674
    How can I change a texture at runtime? I have already managed to change labels at runtime, using the following code:

        using UnityEngine;
        using System.Collections;

        public class switchtime : MonoBehaviour
        {
            private bool showLabel = false;
            private bool showLabe2 = false;
            private bool showLabe3 = false;
            private bool showLabe4 = false;

            public void Start()
            {
                Invoke("ToggleLabel", 1);
                Invoke("ToggleLabel2", 3);
                Invoke("ToggleLabel3", 6);
                Invoke("ToggleLabel4", 9);
            }

            public void ToggleLabel()  { showLabel = !showLabel; }
            public void ToggleLabel2() { showLabe2 = !showLabe2; }
            public void ToggleLabel3() { showLabe3 = !showLabe3; }
            public void ToggleLabel4() { showLabe4 = !showLabe4; }

            public void OnGUI()
            {
                if (showLabel) { GUI.Label(new Rect(300, 200, 100, 20), "Copying window file.."); }
                if (showLabe2) { GUI.Label(new Rect(40, 40, 100, 20), "Expanding windows file.."); }
                if (showLabe3) { GUI.Label(new Rect(80, 80, 100, 20), "Installing Feature.."); }
                if (showLabe4) { GUI.Label(new Rect(100, 100, 100, 20), "Installing Updates"); }
            }
        }

    Now I need to change the GUITexture at runtime. How can I do this? Can anybody help me with the code? Here are some changes I made: I worked on changing the gameObject when the mouse is placed on it. This is the code I applied, and it's working for me:

        using UnityEngine;
        using System.Collections;

        public class change : MonoBehaviour
        {
            public GameObject newSprite;
            private Vector3 currentSpritePosition;

            void Start()
            {
                newSprite.renderer.enabled = false;
            }

            void OnMouseEnter()
            {
                currentSpritePosition = transform.position;
                //make the original sprite invisible
                renderer.enabled = false;
                //give the new sprite the position of the latter
                newSprite.transform.position = currentSpritePosition;
                //then make it visible
                newSprite.renderer.enabled = true;
            }

            void OnMouseExit()
            {
                //just the reverse process
                renderer.enabled = true;
                newSprite.renderer.enabled = false;
            }
        }

    Now the problem is that, in the same code, I need to set a time so that the text is loaded without any mouse click – I mean a particular time at which to load each text, one after another. Here is my screenshot: I have placed the images one below the other; when the mouse hovers over one, the text changes from bold to normal (images 2 and 3). The code I posted is working for me.
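    Since Invoke already drives the timed labels in the first script, the same pattern can drive the sprite swap without any mouse events. A minimal sketch, assuming newSprite is wired up in the Inspector exactly as in the second script (the class name and the 3-second delay are illustrative):

        using UnityEngine;

        public class TimedChange : MonoBehaviour
        {
            public GameObject newSprite; //assigned in the Inspector, as before

            void Start()
            {
                newSprite.renderer.enabled = false;
                //swap to the new sprite after 3 seconds, no click or hover needed
                Invoke("ShowNewSprite", 3f);
            }

            void ShowNewSprite()
            {
                newSprite.transform.position = transform.position;
                renderer.enabled = false;
                newSprite.renderer.enabled = true;
            }
        }

    Chaining several Invoke calls with increasing delays, as in the label script, gives the one-after-another effect across multiple objects.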

    Read the article

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series which examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases represent the lifeblood of business operations. But just how common are the data protection strategies used, and the challenges faced, across various enterprises? In January 2014, Database Trends and Applications Magazine (DBTA), in partnership with Oracle, released the results of its "Oracle Database Management and Data Protection Survey". Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets. Here are some of the key findings from the survey:

    - The majority of respondents manage backups for tens to hundreds of databases, representing a total data volume of 5 to 50 TB (14% manage 50 to 200 TB, and some up to 5 PB or more).
    - About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring; however, these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises.
    - Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, which are used by 17%.
    - Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.

    A few key backup and recovery challenges resonated across many of the respondents:

    - Poor performance and impact on productivity (see Figure 1): 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems.
    - Lack of continuous protection (see Figure 2): 35% revealed that less than 5% of Oracle data is protected in real time.
    - Management complexity: 25% stated that recovery operations are too complex, and 31% reported that backups need constant management (see Figure 1). 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves.

    Figure 1: Current Challenges with Database Backup and Recovery
    Figure 2: Percentage of Organization's Data Backed Up in Real-Time or Near Real-Time

    In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.

    Read the article

  • ??????????

    - by Allen Gao
    This article looks at why RAC nodes reboot unexpectedly, focusing on Oracle Clusterware 10gR2 and 11gR1. In these releases, three CRS daemons are able to reboot a node:

    1. ocssd: responsible for node monitoring and group management, and one of the core CRS components. Node monitoring relies on two kinds of heartbeat, a network heartbeat and a disk heartbeat. If either heartbeat fails for longer than its timeout, the node is evicted from the cluster and reboots itself to guarantee cluster consistency (node kill escalation was added in 11gR1), and the reboot must complete within the reboot time (3 seconds by default).
    Network heartbeat: every second, ocssd.bin sends a heartbeat to the other nodes over the private interconnect. If a node misses heartbeats for longer than misscount (30 seconds by default; 600 seconds when vendor clusterware is used), it is evicted. When the cluster splits into sub-clusters, the sub-cluster holding the majority of votes survives and the other nodes are evicted.
    Disk heartbeat: every second, ocssd.bin also writes a heartbeat to each voting file. A voting file is considered unusable after disktimeout (200 seconds by default), and CRS requires at least [N/2]+1 of the N configured voting files to remain accessible; otherwise the node aborts.

    2. oclsomon: monitors the ocssd process itself; if ocssd.bin hangs, oclsomon reboots the node.

    3. oprocd: present on Linux and Unix platforms; it detects whole-system hangs and scheduling stalls and reboots the node if it is not scheduled within its interval plus margin. It is started by the init.cssd script.

    To diagnose a node reboot, collect:
    1. the operating system logs;
    2. <crs home>/log/<node name>/cssd/ocssd.log;
    3. the oprocd logs (/etc/oracle/oprocd/*.log.* or /var/opt/oracle/oprocd/*.log.*);
    4. <crs home>/log/<node name>/cssd/oclsomon/oclsomon.log;
    5. Oracle OSWatcher output covering the time of the reboot.

    Typical cases:

    1. Reboots initiated by ocssd. If ocssd.log shows missed network heartbeats before the reboot, the private interconnect is the suspect; check the network and the OSWatcher traceroute output. A typical excerpt:

        [ CSSD]2012-03-02 23:56:18.749 [3086] >WARNING: clssnmPollingThread: node <node_name> at 50% heartbeat fatal, eviction in 14.494 seconds
        [ CSSD]2012-03-02 23:56:25.749 [3086] >WARNING: clssnmPollingThread: node <node_name> at 75% heartbeat fatal, eviction in 7.494 seconds
        [ CSSD]2012-03-02 23:56:32.749 [3086] >WARNING: clssnmPollingThread: node <node_name> at 90% heartbeat fatal, eviction in 0.494 seconds
        [CSSD]2012-03-02 23:56:33.243 [3086] >TRACE:   clssnmPollingThread: Eviction started for node <node_name>, flags 0x040d, state 3, wt4c 0
        [CSSD]2012-03-02 23:56:33.243 [3086] >TRACE:   clssnmDiscHelper: <node_name>, node(4) connection failed, con (1128a5530), probe(0)
        [CSSD]2012-03-02 23:56:33.243 [3086] >TRACE:   clssnmDiscHelper: node 4 clean up, con (1128a5530), init state 5, cur state 5
        [CSSD]2012-03-02 23:56:33.243 [3600] >TRACE:   clssnmDoSyncUpdate: Initiating sync 196446491
        [CSSD]2012-03-02 23:56:33.243 [3600] >TRACE:   clssnmDoSyncUpdate: diskTimeout set to (27000)ms

    If ocssd.log instead shows voting file I/O errors or timeouts, the shared storage is the suspect; check the storage path and the OSWatcher iostat output. A typical excerpt:

        2010-08-13 18:34:37.423: [    CSSD][150477728]clssnmvDiskOpen: Opening /dev/sdb8
        2010-08-13 18:34:37.423: [    CLSF][150477728]Opened hdl:0xf4336530 for dev:/dev/sdb8:
        2010-08-13 18:34:37.429: [   SKGFD][150477728]ERROR: -9(Error 27072, OS Error (Linux Error: 5: Input/output error Additional information: 4 Additional information: 720913 Additional information: -1))
        2010-08-13 18:34:37.429: [    CSSD][150477728](:CSSNM00060: )clssnmvReadBlocks: read failed at offset 17 of /dev/sdb8
        2010-08-13 18:34:38.205: [    CSSD][4110736288](:CSSNM00058: )clssnmvDiskCheck: No I/O completions for 200880 ms for voting file /dev/sdb8)
        2010-08-13 18:34:38.206: [    CSSD][4110736288](:CSSNM00018: )clssnmvDiskCheck: Aborting, 0 of 1 configured voting disks available, need 1
        2010-08-13 18:34:38.206: [    CSSD][4110736288]###################################
        2010-08-13 18:34:38.206: [    CSSD][4110736288]clssscExit: CSSD aborting from thread clssnmvDiskPingMonitorThread
        2010-08-13 18:34:38.206: [    CSSD][4110736288]###################################

    2. Reboots initiated by oclsomon. Check oclsomon.log; these reboots usually mean ocssd (a real-time process) hung or was starved of a resource such as CPU. Review the OSWatcher output (vmstat, top) for resource exhaustion around the reboot.

    3. Reboots initiated by oprocd. Check the oprocd log, for example:

        Dec 21 16:15:30.369857 | LASTGASP | AlarmHandler:  timeout(2312 msec) exceeds interval(1000 msec)+margin(500 msec).   Rebooting NOW.

    Common causes are large time changes (for example by ntp, so gradual slewing should be used) and genuine system hangs; setting diagwait=13 gives oprocd a larger margin and is the usual mitigation (changing it requires stopping CRS on all nodes). Again, review the OSWatcher output (vmstat, top) around the reboot.

    For more details, see the following MOS notes:
    Note 265769.1: Troubleshooting 10g and 11.1 Clusterware Reboots
    Note 1050693.1: Troubleshooting 11.2 Clusterware Node Evictions (Reboots)
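    To see the thresholds in play on a given cluster, the CSS settings and the voting file locations can be queried directly; a small sketch using standard crsctl commands, run as the clusterware owner:

        $ crsctl get css misscount      # network heartbeat timeout, in seconds
        $ crsctl get css disktimeout    # voting file I/O timeout, in seconds
        $ crsctl query css votedisk     # configured voting files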

    Read the article

  • PC hangs and reboots from time to time

    - by Bevor
    Hello, I have a very strange problem: since I got my new PC, I have always had problems with it. From time to time the computer freezes for some seconds and then suddenly reboots by itself. I have had this problem since Ubuntu 9.10, and the same with 10.04 and 10.10; that's why I don't think it's a software failure, because the problem has persisted for too long. It doesn't have anything to do with what I'm doing at the time: sometimes I'm listening to music, sometimes I'm only using Firefox, sometimes I'm running 2 or 3 VMs, sometimes I'm watching a DVD. So it's not isolatable. It could freeze once a day or once a week. I took the PC to the vendor twice(!). The first time they changed my power supply, but the problem persisted. The second time they told me that they ran heavy performance tests for 50 hours but didn't find anything. (How can that be, when I get daily freezes with normal usage?) The vendor didn't check the hard discs because they used their own disc with Windows, so they never checked the Linux installation. Yesterday I ran some intensive hard disc scans with SMART, but no errors were found. I ran memtest 3 times, but no errors were found either. I already had this problem in my old flat, so I doubt it has something to do with power fluctuations; I already tried another electrical socket and changed the power strip, but the problem persists. At the moment I have removed 2 of the RAM modules (2x 2GB). In all I have 6GB: 2x2GB and 2x1GB. Could this mix maybe be a problem? Here is a list of my components; I hope somebody finds something I haven't thought of yet:

    1x AMD Phenom II X4 965 Black Edition, 3.4GHz, Quad Core, S-AM3, Boxed
    2x DDR3-RAM 2048MB, PC3-1333 MHz, CL9, Kingston ValueRAM
    2x DDR3-RAM 1024MB, PC3-1333 MHz, CL9, Kingston ValueRAM
    2x SATA II Seagate Barracuda 7200.12, 1TB 32MB Cache = RAID 1
    1x DVD ROM SATA LG DH16NSR, 16x/52x
    1x DVD-+R/-+RW SATA LG GH-22NS50
    1x Cardreader 18in1
    1x PCI-E 2.0 GeForce GTS 250, Retail, 1024MB
    1x Power Supply ATX 400 Watt, CHIEFTEC APS-400S, 80 Plus
    1x Network card PCI Intel PRO/1000GT 10/100/1000 MBit
    1x Mainboard Socket-AM3 ASUS M4A79XTD EVO, ATX

    lshw output:

    description: Desktop Computer product: System Product Name vendor: System manufacturer version: System Version serial: System Serial Number width: 64 bits capabilities: smbios-2.5 dmi-2.5 vsyscall64 vsyscall32 configuration: boot=normal chassis=desktop uuid=80E4001E-8C00-002C-AA59-E0CB4EBAC29A *-core description: Motherboard product: M4A79XTD EVO vendor: ASUSTeK Computer INC. physical id: 0 version: Rev X.0X serial: MT709CK11101196 slot: To Be Filled By O.E.M. *-firmware description: BIOS vendor: American Megatrends Inc. physical id: 0 version: 0704 (11/25/2009) size: 64KiB capacity: 960KiB capabilities: isa pci pnp apm upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb ls120boot zipboot biosbootspecification *-cpu description: CPU product: AMD Phenom(tm) II X4 965 Processor vendor: Advanced Micro Devices [AMD] physical id: 4 bus info: cpu@0 version: AMD Phenom(tm) II X4 965 Processor serial: To Be Filled By O.E.M.
slot: AM3 size: 800MHz capacity: 3400MHz width: 64 bits clock: 200MHz capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt npt lbrv svm_lock nrip_save cpufreq *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 512KiB capacity: 512KiB capabilities: pipeline-burst internal varies data *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 2MiB capacity: 2MiB capabilities: pipeline-burst internal varies unified *-cache:2 description: L3 cache physical id: 7 slot: L3-Cache size: 6MiB capacity: 6MiB capabilities: pipeline-burst internal varies unified *-memory description: System Memory physical id: 36 slot: System board or motherboard size: 2GiB *-bank:0 description: DIMM Synchronous 1333 MHz (0.8 ns) product: ModulePartNumber00 vendor: Manufacturer00 physical id: 0 serial: SerNum00 slot: DIMM0 size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:1 description: DIMM Synchronous 1333 MHz (0.8 ns) product: ModulePartNumber01 vendor: Manufacturer01 physical id: 1 serial: SerNum01 slot: DIMM1 size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:2 description: DIMM [empty] product: ModulePartNumber02 vendor: Manufacturer02 physical id: 2 serial: SerNum02 slot: DIMM2 *-bank:3 description: DIMM [empty] product: ModulePartNumber03 vendor: Manufacturer03 physical id: 3 serial: SerNum03 slot: DIMM3 *-pci:0 description: Host bridge product: RD780 Northbridge only dual slot PCI-e_GFX and HT1 K8 part vendor: ATI Technologies Inc physical id: 100 bus info: pci@0000:00:00.0 version: 00 width: 32 bits clock: 66MHz *-pci:0 description: PCI bridge product: RD790 PCI to PCI bridge (external gfx0 port A) vendor: ATI Technologies Inc physical id: 2 bus info: pci@0000:00:02.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:a000(size=4096) memory:f8000000-fbbfffff ioport:d0000000(size=268435456) *-display description: VGA compatible controller product: G92 [GeForce GTS 250] vendor: nVidia Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:18 memory:fa000000-faffffff memory:d0000000-dfffffff memory:f8000000-f9ffffff ioport:ac00(size=128) memory:fbbe0000-fbbfffff *-pci:1 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port C) vendor: ATI Technologies Inc physical id: 6 bus info: pci@0000:00:06.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 ioport:b000(size=4096) memory:fbc00000-fbcfffff ioport:f6f00000(size=1048576) *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. 
physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 03 serial: e0:cb:4e:ba:c2:9a size: 10MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s resources: irq:45 ioport:b800(size=256) memory:f6fff000-f6ffffff memory:f6ff8000-f6ffbfff memory:fbcf0000-fbcfffff *-pci:2 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port D) vendor: ATI Technologies Inc physical id: 7 bus info: pci@0000:00:07.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 ioport:c000(size=4096) memory:fbd00000-fbdfffff *-firewire description: FireWire (IEEE 1394) product: VT6315 Series Firewire Controller vendor: VIA Technologies, Inc. physical id: 0 bus info: pci@0000:03:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress ohci bus_master cap_list configuration: driver=firewire_ohci latency=0 resources: irq:19 memory:fbdff800-fbdfffff ioport:c800(size=256) *-pci:3 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port E) vendor: ATI Technologies Inc physical id: 9 bus info: pci@0000:00:09.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:43 ioport:d000(size=4096) memory:fbe00000-fbefffff *-ide description: IDE interface product: 88SE6121 SATA II Controller vendor: Marvell Technology Group Ltd. 
physical id: 0 bus info: pci@0000:04:00.0 version: b2 width: 32 bits clock: 33MHz capabilities: ide pm msi pciexpress bus_master cap_list configuration: driver=pata_marvell latency=0 resources: irq:17 ioport:dc00(size=8) ioport:d880(size=4) ioport:d800(size=8) ioport:d480(size=4) ioport:d400(size=16) memory:fbeffc00-fbefffff *-storage description: SATA controller product: SB700/SB800 SATA Controller [IDE mode] vendor: ATI Technologies Inc physical id: 11 bus info: pci@0000:00:11.0 logical name: scsi0 logical name: scsi2 version: 00 width: 32 bits clock: 66MHz capabilities: storage msi ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=64 resources: irq:44 ioport:9000(size=8) ioport:8000(size=4) ioport:7000(size=8) ioport:6000(size=4) ioport:5000(size=16) memory:f7fffc00-f7ffffff *-disk:0 description: ATA Disk product: ST31000528AS vendor: Seagate physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: CC38 serial: 9VP3WD9Z size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=000ad206 *-volume:0 UNCLAIMED description: Linux filesystem partition vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 version: 1.0 serial: 81839235-21ea-4853-90a4-814779f49000 size: 972MiB capacity: 972MiB capabilities: primary ext2 initialized configuration: filesystem=ext2 modified=2010-12-06 18:32:58 mounted=2010-11-01 07:05:10 state=unknown *-volume:1 UNCLAIMED description: Linux swap volume physical id: 2 bus info: scsi@0:0.0.0,2 version: 1 serial: 22b881d5-6f5c-484d-94e8-e231896fa91b size: 486MiB capacity: 486MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-volume:2 UNCLAIMED description: EXT3 volume vendor: Linux physical id: 3 bus info: scsi@0:0.0.0,3 version: 1.0 serial: ad5b0daf-11e8-4f8f-8598-4e89da9c0d84 size: 47GiB capacity: 47GiB capabilities: primary journaled extended_attributes large_files recover ext3 ext2 initialized configuration: created=2010-02-16 20:42:29 filesystem=ext3 modified=2010-11-29 17:02:34 mounted=2010-12-06 18:32:50 state=clean *-volume:3 UNCLAIMED description: Extended partition physical id: 4 bus info: scsi@0:0.0.0,4 size: 882GiB capacity: 882GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume UNCLAIMED description: Linux filesystem partition physical id: 5 capacity: 882GiB *-disk:1 description: ATA Disk product: ST31000528AS vendor: Seagate physical id: 1 bus info: scsi@2:0.0.0 logical name: /dev/sdb version: CC38 serial: 9VP3SCPF size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=000ad206 *-volume:0 UNCLAIMED description: Linux filesystem partition vendor: Linux physical id: 1 bus info: scsi@2:0.0.0,1 version: 1.0 serial: 81839235-21ea-4853-90a4-814779f49000 size: 972MiB capacity: 972MiB capabilities: primary ext2 initialized configuration: filesystem=ext2 modified=2010-12-06 18:32:58 mounted=2010-11-01 07:05:10 state=unknown *-volume:1 UNCLAIMED description: Linux swap volume physical id: 2 bus info: scsi@2:0.0.0,2 version: 1 serial: 22b881d5-6f5c-484d-94e8-e231896fa91b size: 486MiB capacity: 486MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-volume:2 UNCLAIMED description: EXT3 volume vendor: Linux physical id: 3 bus info: scsi@2:0.0.0,3 version: 1.0 serial: ad5b0daf-11e8-4f8f-8598-4e89da9c0d84 size: 47GiB capacity: 47GiB capabilities: primary journaled extended_attributes large_files recover ext3 ext2 initialized 
configuration: created=2010-02-16 20:42:29 filesystem=ext3 modified=2010-11-29 17:02:34 mounted=2010-12-06 18:32:50 state=clean *-volume:3 UNCLAIMED description: Extended partition physical id: 4 bus info: scsi@2:0.0.0,4 size: 882GiB capacity: 882GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume UNCLAIMED description: Linux filesystem partition physical id: 5 capacity: 882GiB *-usb:0 description: USB Controller product: SB700/SB800 USB OHCI0 Controller vendor: ATI Technologies Inc physical id: 12 bus info: pci@0000:00:12.0 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:16 memory:f7ffd000-f7ffdfff *-usb:1 description: USB Controller product: SB700 USB OHCI1 Controller vendor: ATI Technologies Inc physical id: 12.1 bus info: pci@0000:00:12.1 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:16 memory:f7ffe000-f7ffefff *-usb:2 description: USB Controller product: SB700/SB800 USB EHCI Controller vendor: ATI Technologies Inc physical id: 12.2 bus info: pci@0000:00:12.2 version: 00 width: 32 bits clock: 66MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=64 resources: irq:17 memory:f7fff800-f7fff8ff *-usb:3 description: USB Controller product: SB700/SB800 USB OHCI0 Controller vendor: ATI Technologies Inc physical id: 13 bus info: pci@0000:00:13.0 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffb000-f7ffbfff *-usb:4 description: USB Controller product: SB700 USB OHCI1 Controller vendor: ATI Technologies Inc physical id: 13.1 bus info: pci@0000:00:13.1 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffc000-f7ffcfff *-usb:5 description: USB Controller product: SB700/SB800 USB EHCI Controller vendor: ATI Technologies Inc physical id: 13.2 bus info: pci@0000:00:13.2 version: 00 width: 32 bits clock: 66MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=64 resources: irq:19 memory:f7fff400-f7fff4ff *-serial UNCLAIMED description: SMBus product: SBx00 SMBus Controller vendor: ATI Technologies Inc physical id: 14 bus info: pci@0000:00:14.0 version: 3c width: 32 bits clock: 66MHz capabilities: ht cap_list configuration: latency=0 *-ide description: IDE interface product: SB700/SB800 IDE Controller vendor: ATI Technologies Inc physical id: 14.1 bus info: pci@0000:00:14.1 logical name: scsi5 version: 00 width: 32 bits clock: 66MHz capabilities: ide msi bus_master cap_list emulated configuration: driver=pata_atiixp latency=64 resources: irq:16 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:ff00(size=16) *-cdrom:0 description: DVD reader product: DVDROM DH16NS30 vendor: HL-DT-ST physical id: 0.0.0 bus info: scsi@5:0.0.0 logical name: /dev/cdrom1 logical name: /dev/dvd1 logical name: /dev/scd0 logical name: /dev/sr0 version: 1.00 capabilities: removable audio dvd configuration: ansiversion=5 status=nodisc *-cdrom:1 description: DVD-RAM writer product: DVDRAM GH22NS50 vendor: HL-DT-ST physical id: 0.1.0 bus info: scsi@5:0.1.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/scd1 logical name: /dev/sr1 version: TN02 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram 
configuration: ansiversion=5 status=nodisc *-multimedia description: Audio device product: SBx00 Azalia (Intel HDA) vendor: ATI Technologies Inc physical id: 14.2 bus info: pci@0000:00:14.2 version: 00 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: driver=HDA Intel latency=64 resources: irq:16 memory:f7ff4000-f7ff7fff *-isa description: ISA bridge product: SB700/SB800 LPC host controller vendor: ATI Technologies Inc physical id: 14.3 bus info: pci@0000:00:14.3 version: 00 width: 32 bits clock: 66MHz capabilities: isa bus_master configuration: latency=0 *-pci:4 description: PCI bridge product: SBx00 PCI to PCI Bridge vendor: ATI Technologies Inc physical id: 14.4 bus info: pci@0000:00:14.4 version: 00 width: 32 bits clock: 66MHz capabilities: pci subtractive_decode bus_master resources: ioport:e000(size=4096) memory:fbf00000-fbffffff *-network description: Ethernet interface product: 82541PI Gigabit Ethernet Controller vendor: Intel Corporation physical id: 5 bus info: pci@0000:05:05.0 logical name: eth1 version: 05 serial: 00:1b:21:56:f3:60 size: 100MB/s capacity: 1GB/s width: 32 bits clock: 66MHz capabilities: pm pcix bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k6-NAPI duplex=full firmware=N/A ip=192.168.1.2 latency=64 link=yes mingnt=255 multicast=yes port=twisted pair speed=100MB/s resources: irq:20 memory:fbfe0000-fbffffff memory:fbfc0000-fbfdffff ioport:ec00(size=64) memory:fbfa0000-fbfbffff *-usb:6 description: USB Controller product: SB700/SB800 USB OHCI2 Controller vendor: ATI Technologies Inc physical id: 14.5 bus info: pci@0000:00:14.5 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffa000-f7ffafff *-pci:1 description: Host bridge product: Family 10h Processor HyperTransport Configuration vendor: Advanced Micro Devices [AMD] physical id: 101 bus info: pci@0000:00:18.0 version: 00 width: 32 bits clock: 33MHz *-pci:2 description: Host bridge product: Family 10h Processor Address Map vendor: Advanced Micro Devices [AMD] physical id: 102 bus info: pci@0000:00:18.1 version: 00 width: 32 bits clock: 33MHz *-pci:3 description: Host bridge product: Family 10h Processor DRAM Controller vendor: Advanced Micro Devices [AMD] physical id: 103 bus info: pci@0000:00:18.2 version: 00 width: 32 bits clock: 33MHz *-pci:4 description: Host bridge product: Family 10h Processor Miscellaneous Control vendor: Advanced Micro Devices [AMD] physical id: 104 bus info: pci@0000:00:18.3 version: 00 width: 32 bits clock: 33MHz configuration: driver=k10temp resources: irq:0 *-pci:5 description: Host bridge product: Family 10h Processor Link Control vendor: Advanced Micro Devices [AMD] physical id: 105 bus info: pci@0000:00:18.4 version: 00 width: 32 bits clock: 33MHz *-scsi physical id: 1 bus info: usb@2:3 logical name: scsi8 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk:0 description: SCSI Disk physical id: 0.0.0 bus info: scsi@8:0.0.0 logical name: /dev/sdc *-disk:1 description: SCSI Disk physical id: 0.0.1 bus info: scsi@8:0.0.1 logical name: /dev/sdd *-disk:2 description: SCSI Disk physical id: 0.0.2 bus info: scsi@8:0.0.2 logical name: /dev/sde *-disk:3 description: SCSI Disk physical id: 0.0.3 bus info: scsi@8:0.0.3 logical name: /dev/sdf *-network DISABLED description: Ethernet interface physical id: 1 logical name: 
vboxnet0 serial: 0a:00:27:00:00:00 capabilities: ethernet physical configuration: broadcast=yes multicast=yes

    Read the article

  • C# 5 Async, Part 1: Simplifying Asynchrony – That for which we await

    - by Reed
    Today's announcement at PDC of the future directions C# is taking excites me greatly. The new Visual Studio Async CTP is amazing. Asynchronous code – code which frustrates and demoralizes even the most advanced of developers – is taking a huge leap forward in terms of usability. This is handled by building on the Task functionality in .NET 4, as well as the addition of two new keywords to the C# language: async and await. The core of the new asynchronous functionality is built upon three key features. First is the Task functionality in .NET 4, based on Task and Task<TResult>. While Task was intended to be the primary means of asynchronous programming with .NET 4, the .NET Framework was still based mainly on the Asynchronous Pattern and the Event-based Asynchronous Pattern. The .NET Framework added functionality and guidance for wrapping existing APIs into a Task-based API, but the framework itself didn't really adopt Task or Task<TResult> in any meaningful way. The CTP shows that, going forward, this is changing. One of the three key new features coming in C# is actually a .NET Framework feature: nearly every asynchronous API in the .NET Framework has been wrapped into new, Task-based method calls. In the CTP, this is done via an external assembly (AsyncCtpLibrary.dll) which uses extension methods to wrap the existing APIs. However, going forward, this will be handled directly within the Framework. This will have a unifying effect throughout the .NET Framework. This is the first building block of the new features for asynchronous programming: going forward, all asynchronous operations will work via a method that returns Task or Task<TResult>. The second key feature is the new async contextual keyword being added to the language. The async keyword is used to declare an asynchronous function, which is a method that returns void, a Task, or a Task<T>. Inside the asynchronous function, there must be at least one await expression. await is a new C# keyword used to automatically take a series of statements and break it up to potentially use discontinuous evaluation, by awaiting any expression that evaluates to a Task or Task<T>. For example, suppose we want to download a web page as a string. There is a new method added to WebClient: Task<string> WebClient.DownloadStringTaskAsync(Uri). Since this returns a Task<string>, we can use it within an asynchronous function. Suppose, for example, that we wanted to do something similar to my asynchronous Task example – download a web page asynchronously and check to see if it supports XHTML 1.0, then report this into a TextBox.
    This could be done like so:

        private async void button1_Click(object sender, RoutedEventArgs e)
        {
            string url = "http://reedcopsey.com";
            string content = await new WebClient().DownloadStringTaskAsync(url);
            this.textBox1.Text = string.Format("Page {0} supports XHTML 1.0: {1}", url, content.Contains("XHTML 1.0"));
        }

    Let's walk through what's happening here, step by step. By adding the async contextual keyword to the method definition, we are able to use the await keyword on our WebClient.DownloadStringTaskAsync method call. When the user clicks this button, the new method (Task<string> WebClient.DownloadStringTaskAsync(string)) is called, which returns a Task<string>. By adding the await keyword, the runtime will call this method that returns Task<string>, and execution will return to the caller at this point. This means that our UI is not blocked while the web page is downloaded. Instead, the UI thread will "await" at this point and let the WebClient do its thing asynchronously. When the WebClient finishes downloading the string, the user interface's synchronization context will automatically be used to "pick up" where it left off, and the Task<string> returned from DownloadStringTaskAsync is automatically unwrapped and set into the content variable. At this point, we can use that and set our text box content. There are a couple of key points here: asynchronous functions are declared with the async keyword, and they contain one or more await expressions. In addition to the obvious benefits of shorter, simpler code, there are some subtle but tremendous benefits in this approach. When the execution of this asynchronous function continues after the first await statement, the initial synchronization context is used to continue the execution of this function. That means that we don't have to explicitly marshal the call that sets textBox1.Text back to the UI thread – it's handled automatically by the language and framework! Exception handling around asynchronous method calls also just works. I'd recommend every C# developer take a look at the documentation on the new Asynchronous Programming for C# and Visual Basic page, download the Visual Studio Async CTP, and try it out.
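    Since the post notes that exception handling "just works", a short sketch of what that looks like in practice may help – this is an illustrative variation on the handler above, not code from the original article:

        private async void button2_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                // An exception thrown by the asynchronous operation surfaces
                // here, on the UI thread, just as if the call were synchronous.
                string content = await new WebClient().DownloadStringTaskAsync("http://reedcopsey.com");
                this.textBox1.Text = content.Length.ToString();
            }
            catch (WebException ex)
            {
                this.textBox1.Text = "Download failed: " + ex.Message;
            }
        }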

    Read the article

  • Install the Ajax Control Toolkit from NuGet

    - by Stephen Walther
    The Ajax Control Toolkit is now available from NuGet. This makes it super easy to add the latest version of the Ajax Control Toolkit to any Web Forms application. If you haven't used NuGet yet, then you are missing out on a great tool which you can use with Visual Studio to add new features to an application. You can use NuGet with both ASP.NET MVC and ASP.NET Web Forms applications. NuGet is compatible with both Websites and Web Applications, and it works with both C# and VB.NET applications. For example, I habitually use NuGet to add the latest version of ELMAH, Entity Framework, jQuery, jQuery UI, and jQuery Templates to applications that I create. To download NuGet, visit the NuGet website at http://NuGet.org. Imagine, for example, that you want to take advantage of the Ajax Control Toolkit RoundedCorners extender to create cross-browser compatible rounded corners in a Web Forms application. Follow these steps. Right click on your project in the Solution Explorer window and select the option Add Library Package Reference. In the Add Library Package Reference dialog, select the Online tab and enter AjaxControlToolkit in the search box. Click the Install button and the latest version of the Ajax Control Toolkit will be installed. Installing the Ajax Control Toolkit makes several modifications to your application. First, a reference to the Ajax Control Toolkit is added to your application; in a Web Application Project, you can see the new reference in the References folder. Installing the Ajax Control Toolkit NuGet package also updates your Web.config file. The tag prefix ajaxToolkit is registered so that you can easily use Ajax Control Toolkit controls within any page without adding a @Register directive to the page:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" />
              </controls>
            </pages>
          </system.web>
        </configuration>

    You should do a rebuild of your application by selecting the Visual Studio menu option Build, Rebuild Solution so that Visual Studio picks up on the new controls (you won't get Intellisense for the Ajax Control Toolkit controls until you do a build). After you add the Ajax Control Toolkit to your application, you can start using any of the 40 Ajax Control Toolkit controls in your application (see http://www.asp.net/ajax/ajaxcontroltoolkit/samples/ for a reference for the controls).

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="WebApplication1.WebForm1" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Rounded Corners</title>
            <style type="text/css">
                #pnl1 {
                    background-color: gray;
                    width: 200px;
                    color: White;
                    font: 14pt Verdana;
                }
                #pnl1_contents {
                    padding: 10px;
                }
            </style>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Panel ID="pnl1" runat="server">
                    <div id="pnl1_contents">
                        I have rounded corners!
                    </div>
                </asp:Panel>
                <ajaxToolkit:ToolkitScriptManager ID="sm1" runat="server" />
                <ajaxToolkit:RoundedCornersExtender TargetControlID="pnl1" runat="server" />
            </div>
            </form>
        </body>
        </html>

    The page contains the following three controls:

    - Panel – The Panel control named pnl1 contains the content which appears with rounded corners.
    - ToolkitScriptManager – Every page which uses the Ajax Control Toolkit must contain a single ToolkitScriptManager. The ToolkitScriptManager loads all of the JavaScript files used by the Ajax Control Toolkit.
    - RoundedCornersExtender – This Ajax Control Toolkit extender targets the Panel control. It makes the Panel control appear with rounded corners. You can control the "roundiness" of the corners by modifying the Radius property.

    Notice that you get Intellisense when typing the Ajax Control Toolkit tags. As soon as you type <ajaxToolkit, all of the available Ajax Control Toolkit controls appear. When you open the page in a browser, the contents of the Panel appear with rounded corners. The advantage of using the RoundedCorners extender is that it is cross-browser compatible: it works great with Internet Explorer, Opera, Firefox, Chrome, and Safari, even though different browsers implement rounded corners in different ways, and it even works with an ancient browser such as Internet Explorer 6. Getting the latest version of the Ajax Control Toolkit: the Ajax Control Toolkit continues to evolve at a rapid pace. We are hard at work at fixing bugs and adding new features to the project, and we plan to have a new release of the Ajax Control Toolkit each month. The easiest way to get the latest version of the Ajax Control Toolkit is to use NuGet. You can open the NuGet Add Library Package Reference dialog at any time to update the Ajax Control Toolkit to the latest version.
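    As a quick illustration of the Radius property mentioned above (this snippet is a variation on the page shown earlier, not markup from the original post):

        <ajaxToolkit:RoundedCornersExtender
            TargetControlID="pnl1"
            Radius="8"
            runat="server" />

    Larger Radius values produce rounder corners on the target panel.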

    Read the article

  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren't useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use – especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I'll walk through enabling Automatic Code First Migrations on Azure. I'll use the Simple Membership provider for my example.

    First, we'll create a new Azure Web site called "migrationstest", including creating a new SQL Database along with it. Next we'll go to the web site and download the publish profile. In the meantime, we've created a new MVC 4 website in Visual Studio 2012 using the "Internet Application" template. This template is automatically configured to use the Simple Membership provider. We'll do our initial publish to Azure by right-clicking our project and selecting "Publish…". From the "Publish Web" dialog, we'll import the publish profile that we downloaded in the previous step.

    Once the site is published, we'll just click the "Register" link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of Allowed IP Addresses in Azure). One interesting note is that these tables got created with the default Entity Framework initializer – which is to create the database if it doesn't already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn't contain any of the tables from the model.

    At this point, it's time to enable Migrations. We'll open the Package Manager Console and execute the command:

    PM> Enable-Migrations -EnableAutomaticMigrations

    This will enable automatic migrations for our project. Because we used the "-EnableAutomaticMigrations" switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:

    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }

    We'll now add our initial migration:

    PM> Add-Migration Initial

    This will create a migration class called "Initial" that contains the entire model. But we need to remove all of this code because our database already exists, so we are just left with empty Up() and Down() methods:

    public partial class Initial : DbMigration
    {
        public override void Up()
        {
        }

        public override void Down()
        {
        }
    }

    If we don't remove this code, we'll get an exception the first time we attempt to run migrations that tells us: "There is already an object named 'UserProfile' in the database". This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax:

    Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
    new UsersContext().Database.Initialize(false);

    Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we're going to specify in our publish profile that Code First Migrations should be executed. Once we have re-published we can once again navigate to the Register page. At this point the database has not been changed but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let's add 2 new properties to the UserProfile class – Email and DateOfBirth:

    [Table("UserProfile")]
    public class UserProfile
    {
        [Key]
        [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
        public int UserId { get; set; }
        public string UserName { get; set; }
        public string Email { get; set; }
        public DateTime DateOfBirth { get; set; }
    }

    At this point all we need to do is simply re-publish. We'll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* recreated) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio. Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.
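    As a side note, if you prefer explicit migrations to automatic ones, the same schema change can be captured in a named migration. The sketch below is hypothetical (the migration name is mine), but the AddColumn/DropColumn calls are the standard Entity Framework DbMigration operations:

    using System.Data.Entity.Migrations;

    // Hypothetical explicit migration for the two new UserProfile columns;
    // roughly what Automatic Migrations compute behind the scenes.
    public partial class AddEmailAndDateOfBirth : DbMigration
    {
        public override void Up()
        {
            AddColumn("dbo.UserProfile", "Email", c => c.String());
            AddColumn("dbo.UserProfile", "DateOfBirth", c => c.DateTime(nullable: false));
        }

        public override void Down()
        {
            DropColumn("dbo.UserProfile", "DateOfBirth");
            DropColumn("dbo.UserProfile", "Email");
        }
    }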

    Read the article

  • How to Troubleshoot TFS Build Server Failure?

    - by Tarun Arora
    Ever found yourself in that helpless situation where you think you have tried every possible suggestion on the internet to bring the build server back, but it just won't work? Well, sometimes before hunting around for a solution it is important to understand what the problem is. If the error messages in the build logs don't seem to help, you can always enable tracing on the build server to get more information on what could possibly be the root cause of failure. In this blog post today I'll be showing you how to enable tracing on:

    • TFS 2010/11 Server
    • Build Server
    • Client

    Enable Tracing on Team Foundation Server 2010/2011

    On the Team Foundation Server, navigate to C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services, right click web.config and from the context menu select edit. Search for the <appSettings> node in the config file and set the value of the key 'traceWriter' to true. In the <system.diagnostics> tag, set the value of switches from 0 to 4 to set the trace level to maximum, so that diagnostics-level trace information is written. Restart the TFS application pool to force this change to take effect. The application pool restart will impact anyone using the TFS server at present. Note - It is recommended that you do not make any changes to the TFS production application server; this can have serious consequences and can even jeopardize the installation of your server.

    Download the DebugView tool from http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx and set it to capture "Global Events". Perform any actions in the Team Explorer on the client machine; you should now be able to see a series of trace data in the DebugView tool.

    Enable Tracing on Build Controller/Agents

    Log on to the Build Controller/Agent and navigate to the directory C:\Program Files\Microsoft Team Foundation Server 2010\Tools. Look for the configuration file 'TFSBuildServiceHost.exe.config'; if it is not already there, create a new text file and rename it to 'TFSBuildServiceHost.exe.config'. To enable tracing, uncomment the <system.diagnostics> section and paste the snippet below if it is not already there:

    <configuration>
      <system.diagnostics>
        <switches>
          <add name="BuildServiceTraceLevel" value="4"/>
        </switches>
        <trace autoflush="true" indentsize="4">
          <listeners>
            <add name="myListener"
                 type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener, Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
            <remove name="Default" />
          </listeners>
        </trace>
      </system.diagnostics>
    </configuration>

    The path given in initializeData above is where the log file will be created. If the folder is not already there, create it; also make sure that the account running the build service has access to write to this folder. Restart the build Controller/Agent service from the administration console (or net stop tfsbuildservicehost & net start tfsbuildservicehost) in order for the new setting to be picked up.

    Enable TFS Tracing on the Client Machine

    On the client machine, shut down Visual Studio and navigate to C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE. Search for devenv.exe.config, make a backup copy of the config file, then right click the file and from the context menu select edit. If it's not already there, create this file. Edit devenv.exe.config by adding the below code snippet before the last </configuration> tag:

    <system.diagnostics>
      <switches>
        <add name="TeamFoundationSoapProxy" value="4" />
        <add name="VersionControl" value="4" />
      </switches>
      <trace autoflush="true" indentsize="3">
        <listeners>
          <add name="myListener"
               type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
               initializeData="c:\tf.log" />
          <add name="perfListener"
               type="Microsoft.TeamFoundation.Client.PerfTraceListener, Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
        </listeners>
      </trace>
    </system.diagnostics>

    Again, the path given in initializeData is where the log file will be created; if the folder is not already there, create it. Start Visual Studio, and after a bit of activity you should be able to see the new log file being created in the folder specified in the config file.

    Other Resources

    Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Have you come across an interesting one-to-one with the build server? Please share your experience here. Questions/feedback/suggestions, etc.: please leave a comment. Thank you!
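    One aside that may help when choosing switch values: they map onto the standard System.Diagnostics trace levels, with 4 meaning Verbose. The C# sketch below is purely illustrative (it is not TFS code); it reads a switch named like the ones in the config above from its own app.config and writes through any configured listeners:

    using System.Diagnostics;

    class TraceLevelDemo
    {
        static void Main()
        {
            // Looks up the "VersionControl" switch in this executable's .config;
            // 0=Off, 1=Error, 2=Warning, 3=Info, 4=Verbose.
            var versionControlSwitch = new TraceSwitch("VersionControl", "Version control tracing");

            // Emits to every configured TraceListener (e.g. a TextWriterTraceListener)
            // only when the switch is set to Verbose (value 4).
            Trace.WriteLineIf(versionControlSwitch.TraceVerbose, "Verbose tracing is enabled.");
            Trace.Flush();
        }
    }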

    Read the article

  • SQL SERVER – 2012 – All Download Links in Single Page – SQL Server 2012

    - by pinaldave
    SQL Server 2012 RTM has just been announced, and recently I wrote about all the SQL Server 2012 certifications on a single page. As feedback, I received suggestions to have a single page where everything about SQL Server 2012 is listed. I will keep this page updated as new updates are announced.

    Microsoft SQL Server 2012 Evaluation
    Microsoft SQL Server 2012 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization.

    Microsoft SQL Server 2012 Express
    Microsoft SQL Server 2012 Express is a powerful and reliable free data management system that delivers a rich and reliable data store for lightweight Web sites and desktop applications.

    Microsoft SQL Server 2012 Feature Pack
    The Microsoft SQL Server 2012 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server 2012.

    Microsoft SQL Server 2012 Report Builder
    Report Builder provides a productive report-authoring environment for IT professionals and power users. It supports the full capabilities of SQL Server 2012 Reporting Services.

    Microsoft SQL Server 2012 Master Data Services Add-in for Microsoft Excel
    The Master Data Services Add-in for Excel gives multiple users the ability to update master data in a familiar tool without compromising the data's integrity in Master Data Services.

    Microsoft SQL Server 2012 Performance Dashboard Reports
    The SQL Server 2012 Performance Dashboard Reports are Reporting Services report files designed to be used with the Custom Reports feature of SQL Server Management Studio.

    Microsoft SQL Server 2012 PowerPivot for Microsoft Excel® 2010
    Microsoft PowerPivot for Microsoft Excel 2010 provides ground-breaking technology: fast manipulation of large data sets, streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint.

    Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010
    The SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint 2010 technologies allows you to integrate your reporting environment with the collaborative SharePoint 2010 experience.

    Microsoft SQL Server 2012 Semantic Language Statistics
    The Semantic Language Statistics Database is a required component for the Statistical Semantic Search feature in Microsoft SQL Server 2012.

    Microsoft® SQL Server 2012 FileStream Driver – Windows Logo Certification
    Catalog file for the Microsoft SQL Server 2012 FileStream Driver that is certified for Windows Server 2008 R2. It meets Microsoft standards for compatibility and recommended practices with the Windows Server 2008 R2 operating systems.

    Microsoft SQL Server StreamInsight 2.0
    Microsoft StreamInsight is Microsoft's Complex Event Processing technology to help businesses create event-driven applications and derive better insights by correlating event streams from multiple sources with near-zero latency.

    Microsoft JDBC Driver 4.0 for SQL Server
    Download the Microsoft JDBC Driver 4.0 for SQL Server, a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Edition 5 and 6.

    Data Quality Services Performance Best Practices Guide
    This guide focuses on a set of best practices for optimizing the performance of Data Quality Services (DQS).

    Microsoft Drivers 3.0 for SQL Server for PHP
    The Microsoft Drivers 3.0 for SQL Server for PHP provide connectivity to Microsoft SQL Server from PHP applications.

    Product Documentation for Microsoft SQL Server 2012 for firewall and proxy restricted environments
    The Microsoft SQL Server 2012 setup installs only the Help Viewer; it does not install any documentation. All of the SQL Server documentation is available online.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

    New Projects

    Algorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.
    ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: * A Numeric Text Box (an Extended NumericUpDown) * A Splash Screen base fo...
    Automaton Home: Automaton is a home automation software built with a n-Tier, MVVM pattern utilzing WCF, EF, WPF, Silverlight and XBAP.
    Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.
    Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...
    indiologic: Utilities of an Indio
    Neural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...
    Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering Fair
    Pólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.
    project euler solutions from mhinze: mhinze project euler solutions
    Silverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layers
    sqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...
    SuperSocket: SuperSocket, a socket application framework can build FTP/SMTP/POP server easily
    Toast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVC

    New Releases

    ALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox, SplashScreen. Based on the VB.NET code, but that doesn't really matter.
    Blacklist of Providers: 1.0-Milestone 1: Blacklist of Providers, Milestone 1. In this development release implemented - Main interface (Work Item #5453) - Database (Work Item #5523)
    C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.
    Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1 built against .NET 4.0. The WCF tests currently fail unless...
    Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download them individually or all together (in a .zip file). More releases coming soon!
    Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...
    ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.3: Windows Installer file that installs Library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
    GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 Alpha
    Home Access Plus+: v3.0.3.0: Version 3.0.3.0 Release Change Log: Added Announcement Box, Removed script files that aren't needed, Fixed & issue in directory path, Stylesheet...
    Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...
    mavjuz WndLpt: wndlpt-0.2.5: New: Response to 5 LPT inputs "test i 1", New: Reaction to 12 LPT outputs "test q 8", New: Reaction to all LPT pins "test pin 15", New: Syntax: ...
    Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND and a possibility to recreate neural network from...
    Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double clicking on a generic item.
    RoTwee: RoTwee 6.2.0.0: New feature is as next. 16649 Add hashtag for tweet of tune. Now you can tweet your playing tune with hashtag.
    Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want in the click of a button. All you got to do is click the button and browser via th...
    WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: Whats New: File Browser: Folders & Files View reworked, File Browser: Folders are displayed as TreeVi...
    WSDLGenerator: WSDLGenerator 0.0.0.4: - replaced CommonLibrary.dll by CommandLineParser.dll - added better support for custom complex types

    Most Popular Projects

    MetaSharp, Silverlight Toolkit, ASP.NET Ajax Library, All-In-One Code Framework, Windows 7 USB/DVD Download Tool, ニコ生アラート, Windows Double Explorer, Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2, Caliburn: An Application Framework for WPF and Silverlight, ArkSwitch

    Most Active Projects

    Umbraco CMS, Rawr, SDS: Scientific DataSet library and tools, BlogEngine.NET, jQuery Library for SharePoint Web Services, patterns & practices – Enterprise Library, Ionics Isapi Rewrite Filter, Farseer Physics Engine, Fasterflect - A Fast and Simple Reflection API, Fluent Assertions

    Read the article

  • Windows Azure PowerShell for Node.js

    - by shiju
    The Windows Azure PowerShell for Node.js is a command-line tool that allows Node developers to build and deploy Node.js apps in Windows Azure using Windows PowerShell cmdlets. Using Windows Azure PowerShell for Node.js, you can develop, test, deploy and manage Node-based hosted services in Windows Azure. To open the PowerShell for Node.js, click All Programs, Windows Azure SDK Node.js, and run Windows Azure PowerShell for Node.js as Administrator. The following are a few PowerShell cmdlets that let you work with Node.js apps in Windows Azure.

    Create New Hosted Service: New-AzureService <HostedServiceName>

    The below cmdlet creates a Windows Azure hosted service named NodeOnAzure in the folder C:\nodejs. This also creates the ServiceConfiguration.Cloud.cscfg, ServiceConfiguration.Local.cscfg and ServiceDefinition.csdef files and the deploymentSettings.json file for the hosted service.

    PS C:\nodejs> New-AzureService NodeOnAzure

    Create Web Role: Add-AzureNodeWebRole <RoleName>

    The following cmdlet creates a web role named MyNodeApp along with a web.config file.

    PS C:\nodejs\NodeOnAzure> Add-AzureNodeWebRole MyNodeApp

    Install Node Module: npm install <NodeModule>

    The following command installs the Node module Express into your web role app.

    PS C:\nodejs\NodeOnAzure\MyNodeApp> npm install Express

    Run Windows Azure Apps Locally in the Emulator: Start-AzureEmulator -launch

    The following cmdlet creates a local package and runs the Windows Azure app locally in the emulator.

    PS C:\nodejs\NodeOnAzure\MyNodeApp> Start-AzureEmulator -launch

    Stop Windows Azure Emulator: Stop-AzureEmulator

    The following cmdlet stops your Windows Azure app in the emulator.

    PS C:\nodejs\NodeOnAzure\MyNodeApp> Stop-AzureEmulator

    Download Windows Azure Publishing Settings: Get-AzurePublishSettings

    The following cmdlet redirects you to the Windows Azure portal where you can download the Windows Azure publish settings.

    PS C:\nodejs\NodeOnAzure\MyNodeApp> Get-AzurePublishSettings

    Import Windows Azure Publishing Settings: Import-AzurePublishSettings <Location of .publishSettings file>

    The following cmdlet imports the publish settings file from the location c:\nodejs.

    PS C:\nodejs\NodeOnAzure\MyNodeApp> Import-AzurePublishSettings c:\nodejs\shijuvar.publishSettings

    Publish Apps to Windows Azure: Publish-AzureService –name <Name> –location <Location of data centre>

    The following cmdlet publishes the app to Windows Azure with the name "NodeOnAzure" in the location Southeast Asia. Please keep in mind that the service name should be unique.

    PS C:\nodejs\NodeOnAzure\MyNodeApp> Publish-AzureService –name NodeonAzure –location "Southeast Asia" –launch

    Stop Windows Azure Service: Stop-AzureService

    The following cmdlet stops the service which you have deployed previously.

    PS C:\nodejs\NodeOnAzure\MyNodeApp> Stop-AzureService

    Remove Windows Azure Service: Remove-AzureService

    The following cmdlet removes your service from Windows Azure.

    PS C:\nodejs\NodeOnAzure\MyNodeApp> Remove-AzureService

    Quick Summary of PowerShell cmdlets

    Create a new hosted service              | New-AzureService <HostedServiceName>
    Create a web role                        | Add-AzureNodeWebRole <RoleName>
    Install a Node module                    | npm install <NodeModule>
    Run Windows Azure apps locally           | Start-AzureEmulator -launch
    Stop the Windows Azure emulator          | Stop-AzureEmulator
    Download publishing settings             | Get-AzurePublishSettings
    Import publishing settings               | Import-AzurePublishSettings <Location of .publishSettings file>
    Publish apps to Windows Azure            | Publish-AzureService –name <Name> –location <Location of data centre>
    Stop a Windows Azure service             | Stop-AzureService
    Remove a Windows Azure service           | Remove-AzureService

    Read the article

  • To ORM or Not to ORM. That is the question&hellip;

    - by Patrick Liekhus
    UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations. I had missed a point or two.

    I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and the reasoning behind why I chose to use these tools.

    Let's start by defining the term ORM, or Object-Relational Mapping. According to Wikipedia it is defined as the following: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language.

    Why should you care? Basically it allows you to map your business objects in code to their persistence layer behind them. And better yet, why would you want to do this? Let me outline it in the following points:

    • Development speed. No more need to map repetitive query results to object members. Once the map is created the code is rendered for you.
    • Persistence portability. The ORM knows how to map SQL-specific syntax for the persistence engine you choose. It does not matter if it is SQL Server, Oracle or another database of your choosing.
    • Standard/boilerplate code is simplified. The basic CRUD operations are consistent and can use database metadata for basic operations.

    So how does this help? Well, let's compare some of the ORM tools that I have used and/or researched. I have been interested in ORM for some time now. My ORM of choice for a long time was NHibernate, and I still believe it has a strong case in some business situations. However, you have to take business considerations into account and the law of diminishing returns. Because of these two factors, my recent activity and experience have been around DevExpress eXpress Persistent Objects (XPO). The primary reason for this is because they have the DevExpress eXpress Application Framework (XAF) that sits on top of XPO. With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be generated from these maps. While out of the box they provide some simple list and detail screens, you can very easily extend and modify these to your liking. DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it. This sounds worse than it really is. What I mean by this is that if you choose to follow DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while also not worrying about the basics.

    I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure to. Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable. And then couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.

    Designer Features                         | Entity Framework | NHibernate | Fluent w/ NHibernate | Telerik OpenAccess | DevExpress XPO | DevExpress XPO/XAF plus Liekhus Tools
    Uses XML to map relationships             | -                | Yes        | -                    | -                  | -              | -
    Visual class designer interface           | Yes              | -          | -                    | -                  | -              | Yes
    Management integrated w/ Visual Studio    | Yes              | -          | -                    | Yes                | -              | Yes
    Supports schema first approach            | Yes              | -          | -                    | Yes                | -              | Yes
    Supports model first approach             | Yes              | -          | -                    | Yes                | Yes            | Yes
    Supports code first approach              | Yes              | Yes        | Yes                  | Yes                | Yes            | Yes
    Attribute driven coding style             | Yes              | -          | Yes                  | -                  | Yes            | Yes

    I have a very small team and limited resources with a lot of responsibilities. In order to keep up with our customers, we must rely on tools like these. We use the EDMX tool so that we can create a visual representation of the applications with our customers. Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly. This keeps us from requiring as many junior-level developers on our team. I have also worked on multiple teams that believed in writing their own "framework". In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework. Each time that I have worked on custom frameworks, the framework eventually becomes old, outdated and full of "performance" enhancements specific to one or two requirements. With an ORM, there are a lot of people smarter than me working on the bigger issue of persistence and performance. Again, my recommendation would be to use an available framework and get to working on your business domain problems. If your coding is not making money for you, why are you working on it? Do you really need to be writing query-to-object-member code again and again? Thanks
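    To give a flavor of the attribute-driven coding style the table refers to, here is a minimal XPO persistent class sketch. It is illustrative only (the class, property and field names are mine), following the common XPO pattern of deriving from XPObject and calling SetPropertyValue in setters:

    using DevExpress.Xpo;

    // Hypothetical persistent class; XPO maps it to a table for you.
    public class Customer : XPObject
    {
        public Customer(Session session) : base(session) { }

        private string name;
        [Size(100)] // attribute-driven mapping: limits the column to 100 characters
        public string Name
        {
            get { return name; }
            set { SetPropertyValue("Name", ref name, value); }
        }
    }

    // Typical usage with a unit of work:
    // using (var uow = new UnitOfWork())
    // {
    //     var customer = new Customer(uow) { Name = "Acme" };
    //     uow.CommitChanges();
    // }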

    Read the article

  • My Tech Ed North America Preview - Content Edition

    - by Chris Gardner
    As I promised in my last post, I feel the need to give you a rundown on all the technical content I am looking forward to checking out at Tech Ed this year.

    We shall start with the content I know I'll be able to see. This would be some demo stations in the Technical Learning Center. I will DEFINITELY be checking out the Windows Phone Device Bar. I will admit that I am a bit of a phone snob, and I just want to manhandle all that sexy, sexy tech. I am also planning on talking to the Windows Phone team and the Azure team. Year after year, I end up spending more time in either the TLC or taking certification tests than anywhere else.

    This leads me to the one "Exam Cram" session I hope to attend. There is a session to cram for 70-599: Designing and Developing Windows Phone Applications. I know this seems odd. I'm (sort of) an XNA guru. However, I'm not that up on my Silverlight. I know enough to add Silverlight to an XNA project.

    Now, let's talk breakout sessions. We always need to keep track of where we're going. I know, I talk about solving problems over forcing buzz words. However, it is important to know what those buzz words are before you tell people not to use them. For this, we will look to the "What's New in Visual Studio 11" and "What's New in Microsoft .NET Framework 4.5" sessions.

    Of course, we do talk bad about buzz words around here. For this, I'm really looking forward to "Visual C#/Visual Basic: Becoming a Guru with Existing Features." I still have .NET 2 tricks that are crucial to my internal libraries. In-depth knowledge will NEVER trump shortcut libraries.

    There is a session on ASP.NET for phones and tablets. For those of you that have not tried to use ASP.NET on a mobile device, there is one thing you really need to understand. Mobile devices don't use scroll bars. That's right; the thing with the least screen real estate doesn't use scroll bars. Thus, I am hoping this session will give some good advice on having an ASP.NET site target both mobile and desktop.

    The last "business only" session is "The Accidental Team Foundation Server Admin." T & W Operations is a VERY small business. As such, I am the TFS admin because I'm the developer that is also the SQL guru. I keep my server up, but it'd be nice to know some really cool tricks for the part-time guy.

    This leads us to the fun sessions. Coding4Fun has a Kinect session. The Twitter followers will remember that I now have a Kinect for Windows sitting on my desk at work. I have gotten pretty handy with the device, but I KNOW I'm missing some good stuff. Finally, we come to Brian Prince's session on "Making Crazy Money with Games and the Cloud." Never mind the fact that we're using Azure at work. Never mind the fact that I'm actually using the cloud in a game. Never mind the fact that the session has the terms "Crazy Money" and "Games" in the title. If you've never seen Brian Prince speak, you're missing out.

    In the Hands-on-Labs, we are not allowed to make our own schedule. Instead, we're asked what sessions we can't miss, and they try to schedule around those times. This was the one session I said I couldn't miss. This should complete the technical content for the conference. Coming soon, I'll dig into the certifications I hope to attain. Then, we'll talk about the social activities for the week. Here's a preview of that. I am a member of The Krewe...

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Abr/10)

    - by Claudia Costa
    We are excited to announce the Oracle ECM 10g R3 Implementation boot camp, taking place on 12-14 April, which will cover the topics described below. With the goal of helping partners develop their competencies, Oracle University and Oracle Alliances & Channels designed this boot camp with compacted content and, therefore, reduced costs. Price per participant (3 days): 1,250 EUR + VAT.

    Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users - wherever they work in the enterprise. The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content.

    Target Audience: The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology.

    Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and will provide implementation instruction on the ECM technology offered by Oracle. The boot camp will:
    • Provide hands-on experience in implementing Oracle's truly unified, open and standards-based ECM technology
    • Provide strategic direction about Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development
    • Expose a broad set of Oracle's ECM technologies.

    Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering to manage and maintain unstructured content, and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM).

    Topics Covered
    • Introduction to Oracle UCM (UCM Overview, UCM Architecture Overview)
    • Content Server and Document Management basics: Installation and Administration Skills (user and security admin; configuration - metadata, DCLs, profiles, rules, etc.; workflow admin; system properties and Component Manager; managing subscriptions); Contributing Content (browser form, WebDAV folder, Desktop Integration); Searching
    • Web Content Management (Site Studio)
    • Universal Records Management
    • Information Rights Management (IRM)
    • Image & Process Management (IPM)
    • Oracle Document Capture
    • Oracle eMail Archive Service

    Labs
    • Content Server Installation
    • Use and Administration of Content Server
    • Introduction to Site Studio
    • Use and Administration of Records Manager

    Demo: The R&D Group and the New Patent. Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management.

    Case Study
    Use Case 1: Enable City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and, eventually, the public Web site.
    Use Case 2: Help Acme & Co with archiving; its goal is to become "paperless" by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.

    Agenda:

    Day 1
    · ECM Overview & Content Server
    · ECM Overview
    · ECM Architecture and Installation
    · UCM and Digital Asset Management DEMO
    · Lab 1 - Content Server Installation
    · Lab 2 - Use and Administration of Content Server

    Day 2
    · Web Content Management
    · Lab 2 - Use and Administration of Content Server (continued)
    · Introduction to Web Content Management
    · Lab 3 - Site Studio

    Day 3
    · URM/IRM/IPM
    · Introduction to Universal Records Management
    · Lab 4 - URM
    · Introduction to Information Rights Management
    · Information Rights Management DEMO
    · Introduction to Image and Process Management
    · Image and Process Management Demo
    · Oracle Document Capture
    · Oracle eMail Archive

    Material needed for the boot camp: This boot camp requires attendees to provide their own laptops. Attendee laptops must meet the following minimum hardware/software requirements:
    Hardware
    • RAM: 2 GB minimum (1 GB RAM is not enough)
    • HDD: 15 GB free HDD space

    Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp.

    For more information/registration, please contact: Mónica Pires, 21 423 51 44.
    Schedule and venue: 9:30 - 12:30 and 14:00 - 17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • SQL SERVER – Sends backups to a Network Folder, FTP Server, Dropbox, Google Drive or Amazon S3

    - by pinaldave
    Let me tell you about one of the most useful SQL tools that every DBA should use – SQLBackupAndFTP. I have been using this tool since 2009, and it is the first program I install on a SQL Server. Download the free version, spend one minute on configuration, and your daily backups are safe in the cloud. In summary, SQLBackupAndFTP:

    • Creates SQL Server database and file backups on schedule
    • Compresses and encrypts the backups
    • Sends backups to a network folder, FTP server, Dropbox, Google Drive or Amazon S3
    • Sends email notifications of a job's success or failure

    SQLBackupAndFTP comes in Free and Paid versions (starting from $29) – see the version comparison. The free version is fully functional for unlimited ad hoc backups or for scheduled backups of up to two databases – it will be sufficient for many small customers. What has impressed me from the beginning is that I understood how it works and was able to configure the job from a single form (see Image 1 – Main form above):

    1. Connect to your SQL Server and select the databases to be backed up.
    2. Click "Add backup destination" to configure where backups should go (network folder, FTP server, Dropbox, Google Drive or Amazon S3).
    3. Enter your email to receive email confirmations.
    4. Set the time to start daily full backups (or go to Settings if you need Differential or Transaction Log backups on a flexible schedule).
    5. Press the "Run Now" button to test.

    You can get to the scheduling form by clicking the "Settings" button in the "Schedule" section. Select what types of backups you want and how often to run them, and you will see the scheduled backups in the "Estimated backup plan" list. A detailed tutorial is available on the developer's website.

    Along with SQLBackupAndFTP, setup gives you the option to install "One-Click SQL Restore" (you can install it stand-alone too) – a basic tool for restoring just full backups. However basic, you can drag-and-drop onto it the zip file created by SQLBackupAndFTP; it unzips the BAK file if necessary, connects to the SQL Server on start, selects the right database, and is smart enough to restart the server to drop open connections if necessary – very handy for developers who need to restore databases often.

    You may ask why this tool is better than the maintenance tasks available in SQL Server. While maintenance tasks are easy to set up, SQLBackupAndFTP is still way easier, and its integrated solution for compression, encryption, FTP, cloud storage and email makes it superior to maintenance tasks in every aspect. On the flip side, SQLBackupAndFTP is not the fanciest tool to manage backups or check their health, and it only works reliably on local SQL Server instances. In other words, it has to be installed on the SQL Server itself. For remote servers it uses scripting, which is less reliable. This limitation is actually inherent in SQL Server itself, as the BACKUP DATABASE command creates the backup not on the client, but on the server itself.

    This tool is compatible with almost all the known SQL Server versions. It works with SQL Server 2008 (all versions) and many of the previous versions. It is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in tools for backup. I strongly recommend this tool to all DBAs. They must absolutely try it, as it is free and does exactly what it promises. You can download your free copy of the tool from here. Please share your experience about using this tool. I am eager to receive your feedback regarding this article.
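    The server-side nature of BACKUP DATABASE mentioned above is easy to see from client code. Here is a minimal hedged sketch (the connection string, database name and backup path are placeholders); note that the TO DISK path is resolved by the SQL Server service, not by the machine running this program:

    using System.Data.SqlClient;

    class BackupDemo
    {
        static void Main()
        {
            // Placeholder connection string; adjust server and credentials.
            using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            {
                conn.Open();
                // C:\Backups must exist on the SQL Server machine itself,
                // because the server process writes the .bak file.
                var sql = "BACKUP DATABASE [MyDatabase] TO DISK = N'C:\\Backups\\MyDatabase.bak' WITH INIT";
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.CommandTimeout = 0; // backups can exceed the 30-second default
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }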
Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLServer, T SQL, Technology

    Read the article

  • Cross platform application revolution

    - by anirudha
    Every developer knows that a Windows application runs only on Windows. That is the small, obvious pity we all accept, but it is also a losing point for desktop applications: it makes developers think small – this app is only for Windows, that one only for Mac. It is a big reason behind the success of the web, because nobody buys a new operating system just to use an application they cannot even try on their own platform. Thinking cross-platform is much bigger than building an application whose audience is divided by the OS they use: Mac users can't use the Windows app, and Windows users can't use the Mac app.

    The web, on the other hand, works for everyone. Whether you run IE 6 or IE 8, Chrome or Chrome 8, Firefox or Firefox 3.6.13 – even a beta – a good website is shown the same in every browser, with perhaps some minor differences. From a child to a grandfather, male and female, everyone can use the internet, no matter what they have – any OS, any browser, even one as silly as IE 6. There is none of the worrying that comes with Windows: "some component is missing, click here to get it"; you can't use this software because you have Windows SP1, SP2 or SP3; you need to install this before that. This stupidity comes mainly with Microsoft software. Last year I found an issue with the Web Platform Installer: it forced users to install another product when they got something from WPI. For example, you had to install Visual Studio 2008 before installing Visual Studio 2010 Express. Can anyone tell me why a user should get the old 2008 version when they want the latest Express edition? I never tried again to check whether the issue was solved.

    Another thing: you can't get IE 9 on Windows XP. In that case, don't think and worry about it, because Firefox and Chrome are much better. The stupidity from Microsoft is too much. They never tell you about Firebug; even when they discuss the damaged tool in IE, they call it "developer tools", because they are Microsoft and they only think about how to market their products. You need to install many things without any reason, such as many SQL Server components even when you use another RDBMS. You can't say no to them, because you need a tool and the tool requires a useless component called SQL Server. I have never found a website that forces me to install this for this and that for that before it lets me in.

    That is another good thing about the web: nothing is required up front. You don't need to install the .NET Framework 4 before enjoying Facebook or Twitter. You may have found that Microsoft's failed project Windows Planet forces you to get Silverlight before going there. I had never heard of it until a friend mentioned it to me some months ago, and I found nothing better there. What would users do if Facebook forced them to install Silverlight, Adobe Flash or the .NET Framework 4? If you didn't install it, Facebook would say bye-bye, ta-ta, never come here before installing the .NET Framework 4 – the door opens for you only after installing it, not before. The story is the same as a mother telling her child, "tell me sorry before coming into the home," when they do something wrong. The web never forces you to do something for it. Sometimes websites even allow you to use another website's account to sign in – a very fast login for you – because they know the importance of your time.

    Read the article

  • SQL SERVER – The Story of a Lesser Known Startup Parameter in SQL Server – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund (blog | twitter | facebook). He presented a fantastic session at our last UG meeting, and there were lots of requests from attendees that he blog about it. Well, here is the blog post about that very popular UG session. Let us read the entire blog post in the voice of Balmukund himself.

    During my last session at the SQL Bangalore User Group (Facebook) meeting, I was lucky enough to deliver a session on SQL Server startup issues. The name of the session was "SQL Engine Starting Trouble – How to start?" From the feedback, I realized that one of the "not well known" startup parameters is -m. Okay, you might say, "I know that this is used to start SQL in single user mode." But what you might not know is that you can pass a string with -m which has special meaning and use. I have used this parameter in my blog here, but it looks like not many of you have seen that.

    It happens most of the time when we want to start SQL Server in single user mode: someone else makes a connection before you can. The only choice you have is to repeat the same process again till you succeed. Some smart DBAs may disable the remote network protocols (TCP/IP and Named Pipes) of the SQL instance and allow only local connections to SQL. Once the activity is complete, our dear smart DBA has to remember to re-enable the network protocols. Sometimes, it may be a local service or application getting a connection to SQL before we can. There is a better way to deal with it. Yes, you have guessed it correctly: the -m parameter with a string.

    Since I work in the SQL Product Support team, I may know a few more undocumented commands and parameters, but this is not undocumented stuff. It's already documented in Books Online. So in this blog, I am going to show a demo of its usage. As the documentation says, "Do not use this option as a security feature." So please read this blog as a knowledge enhancer and a troubleshooting aid, not a security feature.

    On my laptop, I have a default instance of SQL Server 2012, and here is what we would see in the Configuration Manager. Now, I would go ahead and stop the SQL service by selecting SQL Server (MSSQLServer) > Right Click > Stop. There are multiple ways to start SQL with a startup parameter:

    1) Use the Net Start command from a command prompt:

    Net Start MSSQLServer /mSQLCMD

    The above command is the simplest way to add a startup parameter to SQL. This parameter would be cleared once we stop and start SQL.

    2) Add the startup parameter via the Configuration Manager. The steps are already listed here. We need to add -mSQLCMD. If we compare 1 and 2, it's clear that unless we modify the startup parameters and remove -m, it remains in effect.

    3) Start the SQL service via the command line:

    SQLServr.exe –mSQLCMD –s<InstanceName>

    Wait, what does SQLCMD mean with /m? It's an instruction to SQL to start SQL Server in single user mode and allow only the application which is SQLCMD. Any other application would fail with a "Login Failed for User" error message. It is important to note that the string is case sensitive. This value should be picked from the program_name column of sys.dm_exec_sessions. I have made a connection using SQLCMD, and as we can see, it comes through as upper case "SQLCMD".

    If we want only Management Studio query windows to connect, then we need to give -m"Microsoft SQL Server Management Studio - Query" as the startup parameter. In the example below, I have given it as SQLCMd (lower case d at the end), and we notice that we are not able to connect to the SQL instance. This proves that the parameter works as expected and that it is case sensitive. The Error Log would show the information below. How do you get the Error Log location? I have already blogged about it. Hope you have learned something new.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
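    A small companion sketch to the post above: to find the exact string to pass with -m, you can check what your application reports in sys.dm_exec_sessions. The connection string below is a placeholder; the Application Name keyword is what surfaces, case sensitively, in the program_name column:

    using System;
    using System.Data.SqlClient;

    class ShowProgramName
    {
        static void Main()
        {
            // "Application Name" here is the string that -m"..." must match exactly.
            var connStr = "Server=.;Database=master;Integrated Security=true;Application Name=MyMaintenanceApp";
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                var sql = "SELECT program_name FROM sys.dm_exec_sessions WHERE session_id = @@SPID";
                using (var cmd = new SqlCommand(sql, conn))
                {
                    Console.WriteLine(cmd.ExecuteScalar()); // prints: MyMaintenanceApp
                }
            }
        }
    }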

    Read the article

  • BPM Suite 11gR1 Released

    - by Manoj Das
    This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying the BEA ALBPM product, which became Oracle BPM 10gR3, with the Oracle stack. Some of the highlights of this release are:

    • BPMN 2.0 modeling and simulation
    • Web based Process Composer for BPMN and Rules authoring
    • Zero-code environment with full access to Oracle SOA Suite's rich set of application and other adapters
    • Process Spaces – Out-of-box integration with WebCenter Suite
    • Process Analytics – Native process cubes as well as integration with Oracle BAM

    You can learn more about this release from the documentation.

    Notes about downloading and installing

    Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only an incremental patch). To install:

    1. Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location).
    2. Download and install SOA 11.1.1.3.0.
    3. During the configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with SOA Suite 11g (11.1.1.3.0).
    4. If you plan to use Process Spaces, also install WebCenter 11.1.1.3.0, which also is delivered as a sparse release and needs to be installed on top of WebCenter 11.1.1.2.0.

    Some early feedback

    We have been receiving very encouraging feedback on this release. Some quotes from partners are included below:

    "I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig-Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view, analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure-play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA composites in SOA Suite 11g. This allows one to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things in BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process-model-to-BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: with BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: start projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non-functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!"
    Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA

    "Reuse of components across the Oracle 11g Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution. BPM is just one of the components plugging into the stack and reuses all other components."
    Mr. Leon Smiers, Oracle Solution Architect, Capgemini

    "I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed by the overall offering of BPM 11g. BPM 11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for those of us already familiar with SOA Suite 11g. BPMN is just another SCA component within an SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that, all known best practices for making a BPEL process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM 11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: thanks for the great workshop! I'm really looking forward to my first project using Oracle BPM 11g!"
    Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Trivadis

    Some earlier feedback was summarized in this post.

    Read the article
