Search Results

Search found 34668 results on 1387 pages for 'return'.


  • Trust: A New Line of Business

    - by ruth.donohue
    What do you think are the key factors in building and maintaining your company's reputation... Innovation? Price? Surprisingly, according to the recent 2010 Edelman Trust Barometer, survey respondents in the US valued transparency of business practices as well as company trustworthiness as the two most important factors influencing corporate reputation. What is trust? It's the confidence in a company's ability to do what is right for all its stakeholders -- customers, employees, and the broader society at large -- and not just shareholders. Trust is an increasingly important component of maintaining your company's reputation and brand, and Western countries have seen an increase in global trust. Global businesses headquartered in the United States in particular have seen an 18-point boost, to 54 percent, in global trust. Whether this uptick represents the start of a new trend or a mere blip in the barometer remains to be seen. The Edelman report notes that the increase is "tenuous", as people expect companies to return to "business as usual" once the economy rebounds. This warning underscores the need for companies to continue engaging in open and frequent communication and transparent business practices with their stakeholders across multiple channels, and to view trust as a "new line of business" to cultivate.


  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or maybe I'm missing something in the math. It's gonna be long, but here's as much background as I can think of.

    First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means that whenever I export a matrix from Blender through Python, I export it (in text format) in 4 lines, each representing a column. This is so I can then read them back into my engine like this:

        if (fscanf(fileHandle, "%f %f %f %f",
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[0],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[1],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[2],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) {
            if (fscanf(fileHandle, "%f %f %f %f",
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[4],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[5],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[6],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) {
                if (fscanf(fileHandle, "%f %f %f %f",
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[8],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[9],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[10],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) {
                    if (fscanf(fileHandle, "%f %f %f %f",
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[12],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[13],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[14],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) {

    I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code.

    Having said that, if I understand correctly, the basic idea of skinning/animation is:

    - I have a mesh made up of vertices.
    - I have the mesh model-world transform W.
    - I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj, meaning the matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1.
    - I have keyframes, each containing a set of current poses Cj for each joint j. These are initially imported to my engine in TQS format, but after (S)LERPing them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones), only for the current spatial configuration of each joint at that frame.

    Given the above, the "skeletal animation algorithm" is: on each frame, check how much time has elapsed and compute the resulting current time in the animation, from 0 meaning frame 0 to 1 meaning the end of the animation. (Oh, and I'm looping forever, so the time is mod(total duration).) Then for each joint:

    1 - Calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1.
    2 - Use the current animation time to LERP the components of the TQS and come up with an interpolated current pose matrix Cj, which should transform from the joint's current configuration space to world space.
    Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj.
    3 - Now that I have world versions of Bj and Cj, I store this joint's world skinning matrix K_wj = Cj_w Bj_w^-1.

    The above is roughly implemented like so:

        - (void)update:(NSTimeInterval)elapsedTime
        {
            static double time = 0;
            time = fmod((time + elapsedTime), 1.);
            uint16_t LERPKeyframeNumber = 60 * time;
            uint16_t lkeyframeNumber = 0;
            uint16_t lkeyframeIndex = 0;
            uint16_t rkeyframeNumber = 0;
            uint16_t rkeyframeIndex = 0;
            for (int i = 0; i < aClip.keyframesCount; i++) {
                uint16_t keyframeNumber = aClip.keyframes[i].number;
                if (keyframeNumber <= LERPKeyframeNumber) {
                    lkeyframeIndex = i;
                    lkeyframeNumber = keyframeNumber;
                } else {
                    rkeyframeIndex = i;
                    rkeyframeNumber = keyframeNumber;
                    break;
                }
            }
            double lTime = lkeyframeNumber / 60.;
            double rTime = rkeyframeNumber / 60.;
            double blendFactor = (time - lTime) / (rTime - lTime);
            GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
            GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];
            for (int i = 0; i < aSkeleton.jointsCount; i++) {
                F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i];
                F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i];
                GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
                GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
                GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);
                GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
                currentTransform = GLKMatrix4Multiply(currentTransform,
                    GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z));
                currentTransform = GLKMatrix4Multiply(currentTransform,
                    GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z));
                if (aSkeleton.joints[i].parentIndex == -1) {
                    bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform;
                    currentPosePalette[i] = currentTransform;
                } else {
                    bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform,
                        bindPosePalette[aSkeleton.joints[i].parentIndex]);
                    currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex],
                        currentTransform);
                }
                aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
            }
        }

    At this point, I should have my skinning palette. So on each frame in my vertex shader, I do:

        uniform mat4 modelMatrix;
        uniform mat4 projectionMatrix;
        uniform mat3 normalMatrix;
        uniform mat4 skinningPalette[6];

        attribute vec4 position;
        attribute vec3 normal;
        attribute vec2 tCoordinates;
        attribute vec4 jointsWeights;
        attribute vec4 jointsIndices;

        varying highp vec2 tCoordinatesVarying;
        varying highp float lIntensity;

        void main()
        {
            vec3 eyeNormal = normalize(normalMatrix * normal);
            vec3 lightPosition = vec3(0., 0., 2.);
            lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));
            tCoordinatesVarying = tCoordinates;
            vec4 skinnedVertexPosition = vec4(0.);
            for (int i = 0; i < 4; i++) {
                skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
            }
            gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition;
        }

    The result: the mesh parts that are supposed to animate do animate and follow the expected motion; however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of the rotations seem to be off.
    So a few observations: in the above shader, notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen.

    As far as exporting the joints from Blender, my Python script exports, for each armature bone in bind pose, its matrix in this way:

        def DFSJointTraversal(file, skeleton, jointList):
            for joint in jointList:
                poseJoint = skeleton.pose.bones[joint.name]
                jointTransform = poseJoint.matrix.inverted()
                file.write('Joint ' + joint.name + ' Transform {\n')
                for col in jointTransform.col:
                    file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
                DFSJointTraversal(file, skeleton, joint.children)
                file.write('}\n')

    And for current / keyframe poses (assuming I'm in the right keyframe):

        def exportAnimations(filepath):
            # Only one skeleton per scene
            objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE']
            if len(objList) == 0:
                return
            elif len(objList) > 1:
                return  # raise exception? dialog box?
            skeleton = objList[0]
            jointNames = [bone.name for bone in skeleton.data.bones]
            for action in bpy.data.actions:
                # One animation clip per action in Blender, named as the action
                animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip"
                file = open(animationClipFilePath, 'w')
                file.write('target skeleton: ' + skeleton.name + '\n')
                file.write('joints count: {:d}'.format(len(jointNames)) + '\n')
                skeleton.animation_data.action = action
                keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves])
                keyframes = []
                for fcurve in action.fcurves:
                    for keyframe in fcurve.keyframe_points:
                        keyframes.append(keyframe.co[0])
                keyframes = set(keyframes)
                keyframes = [kf for kf in keyframes]
                keyframes.sort()
                file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n')
                for kfIndex in keyframes:
                    bpy.context.scene.frame_set(kfIndex)
                    file.write('keyframe: {:d}\n'.format(int(kfIndex)))
                    for i in range(0, len(skeleton.data.bones)):
                        file.write('joint: {:d}\n'.format(i))
                        joint = skeleton.pose.bones[i]
                        jointCurrentPoseTransform = joint.matrix
                        translationV = jointCurrentPoseTransform.to_translation()
                        rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion()
                        scaleV = jointCurrentPoseTransform.to_scale()
                        file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
                        file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
                        file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
                    file.write('\n')
                file.close()

    Which I believe follows the theory explained at the beginning of my question. But then I checked out Blender's DirectX .x exporter for reference...
    and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare):

        if joint.parent:
            jointTransform = poseJoint.parent.matrix.inverted()
        else:
            jointTransform = Matrix()
        jointTransform *= poseJoint.matrix

    and exporting current keyframe poses like this:

        if joint.parent:
            jointCurrentPoseTransform = joint.parent.matrix.inverted()
        else:
            jointCurrentPoseTransform = Matrix()
        jointCurrentPoseTransform *= joint.matrix

    Why are they using the parent's transform instead of the joint in question's? Isn't the joint transform assumed to exist in the context of a parent transform, since after all it transforms from this joint's space to its parent's? And why are they concatenating in the same order for both bind poses and keyframe poses, if these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.


  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
    There are lots of utilities to recover deleted files, but what if you can't boot up your computer, or the whole drive has been formatted? We'll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions. We've shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren't going to cut it. In this article, we'll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely.

    Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors – the quicker you realize that you want to recover a file, the more likely you will be able to do so.

    Our setup
    To demonstrate these tools, we've set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures on each partition. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever?

    Installing the tools
    All of the tools we're going to use are in Ubuntu's universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled "Community-maintained Open Source software (universe)". Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded, and the search index rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel.

    The testdisk package includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures. Foremost operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM.

    Recover hard drive partitions
    If you can't mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files with one step. TestDisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in:

        sudo testdisk

    If you'd like, you can create a log file, though it won't affect how much data you recover. Once you make your choice, you're greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you to select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press Enter.
    In our case, our small hard drive has previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we'll just recover them by pressing Enter. If TestDisk hasn't found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we'll recover them by selecting Write and pressing Enter. TestDisk informs us that we will have to reboot.

    Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier.

    After restarting, both of our partitions are back to their original states, pictures and all.

    Recover files of certain types
    For the following examples, we deleted the 10 pictures from both partitions and then reformatted them.

    PhotoRec
    Of the three tools we'll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in:

        sudo photorec

    To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you to select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit Enter. However, this process can be very slow, and in our case we only want to search for picture files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press "s" to clear all of the selections, and then find the appropriate file types – jpg, gif, and png – and select them by pressing the right arrow key. Once we've selected these three, we press "b" to save these selections. Press Enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight "No partition" and "Search" and then press Enter. PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we're not recovering very much, we'll store it on the Ubuntu Live CD's desktop.

    Note: Do not recover files to the hard drive you're recovering from.

    PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names.

    Foremost
    Foremost is a command-line program with no interactive interface like PhotoRec's, but it offers a number of command-line options to get as much data out of your hard drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in:

        foremost -h

    In our case, the command-line options that we are going to use are:
    -t : a comma-separated list of types of files to search for. In our case, this is "jpeg,png,gif".
    -v : enables verbose mode, giving us more information about what foremost is doing.
    -o : the output folder to store recovered files in.
    In our case, we created a directory called "foremost" on the desktop.
    -i : the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda.

    Our foremost invocation is:

        sudo foremost -t jpeg,png,gif -o foremost -v -i /dev/sda

    Your invocation will differ depending on what you're searching for and where you're searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem. Foremost recommends using the -d command-line option for Linux file systems like ext2. We'll run foremost again, adding the -d command-line option to our invocation:

        sudo foremost -t jpeg,png,gif -d -o foremost -v -i /dev/sda

    This time, foremost is able to recover all 20 images! A final look at the pictures reveals that they were recovered with no problems.

    Scalpel
    Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we'll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in:

        sudo gedit /etc/scalpel/scalpel.conf

    scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the "#" character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we'll just define the input device (/dev/sda) and the output folder (a folder called "scalpel" that we created on the desktop). Our invocation is:

        sudo scalpel /dev/sda -o scalpel

    Scalpel is able to recover 18 of our 20 files. A quick look at the files scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg).

    Conclusion
    In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it's very likely that playing with its command-line options would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down!


  • Configuration setting of HttpWebRequest.Timeout value

    - by Michael Freidgeim
    I wanted to set HttpWebRequest.Timeout in configuration on the client. I was surprised that MS doesn't provide it as a part of .NET configuration. (Answer in the http://forums.silverlight.net/post/77818.aspx thread: "Unfortunately specifying the timeout is not supported in current version. We may support it in the future release.") I added it to the appSettings section of app.config and read it in the method of my HttpWebRequestHelper class:

        // The Method property can be set to any of the HTTP 1.1 protocol verbs:
        // GET, HEAD, POST, PUT, DELETE, TRACE, or OPTIONS.
        public static HttpWebRequest PrepareWebRequest(string sUrl, string Method, CookieContainer cntnrCookies)
        {
            HttpWebRequest webRequest = WebRequest.Create(sUrl) as HttpWebRequest;
            webRequest.Method = Method;
            webRequest.ContentType = "application/x-www-form-urlencoded";
            webRequest.CookieContainer = cntnrCookies;
            // default 100 sec - http://blogs.msdn.com/b/buckh/archive/2005/02/01/365127.aspx
            webRequest.Timeout = ConfigurationExtensions.GetAppSetting("HttpWebRequest.Timeout", 100000);

            /* try to change - from http://www.codeproject.com/csharp/ClientTicket_MSNP9.asp
            webRequest.AllowAutoRedirect = false;
            webRequest.Pipelined = false;
            webRequest.KeepAlive = false;
            webRequest.ProtocolVersion = new Version(1,0); // protocol 1.0 works better than 1.1 ??
            */

            // MNF 26/5/2005 Some web servers expect UserAgent to be specified,
            // so let's say it's IE6
            webRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1.1.4322)";

            DebugOutputHelper.PrintHttpWebRequest(webRequest, TraceOutputHelper.LineWithTrace(""));
            return webRequest;
        }

    Related link: http://stackoverflow.com/questions/387247/i-need-help-setting-net-httpwebrequest-timeout
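    For reference, the app.config entry that PrepareWebRequest reads might look like this (a minimal sketch; the key name comes from the GetAppSetting call above, and the value, in milliseconds, is illustrative):

        <configuration>
          <appSettings>
            <!-- Read by PrepareWebRequest above; 300000 ms = 5 minutes (illustrative value) -->
            <add key="HttpWebRequest.Timeout" value="300000"/>
          </appSettings>
        </configuration>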


  • Integrating NetBeans for Raspberry Pi Java Development

    - by speakjava
    Raspberry Pi IDE Java Development
    The Raspberry Pi is an incredible device for building embedded Java applications but, while you can run an IDE on the Pi, doing so really pushes things to the limit. It's much better to use a PC or laptop to develop the code and then deploy and test on the Pi. What I thought I'd do in this blog entry was to run through the steps necessary to set up NetBeans on a PC for Java code development, with automatic deployment to the Raspberry Pi as part of the build process.

    I will assume that your starting point is a Raspberry Pi with an SD card that has one of the latest Raspbian images on it. This is good because this now includes JDK 7 as part of the distro, so there is no need to download and install a separate JDK. I will also assume that you have installed the JDK and NetBeans on your PC. These can be downloaded here.

    There are numerous approaches you can take to this, including mounting the file system from the Raspberry Pi remotely on your development machine. I tried this and found that NetBeans got rather upset if the file system disappeared, either through network interruption or the Raspberry Pi being turned off. The following method uses copying over SSH, which will fail more gracefully if the Pi is not responding.

    Step 1: Enable SSH on the Raspberry Pi
    To run the Java applications you create you will need to start Java on the Raspberry Pi with the appropriate class name, classpath and parameters. For non-JavaFX applications you can either do this from the Raspberry Pi desktop or, if you do not have a monitor connected, through a remote command line. To execute the remote command line you need to enable SSH (a secure shell login over the network) and connect using an application like PuTTY.

    You can enable SSH when you first boot the Raspberry Pi, as the raspi-config program runs automatically. You can also run it at any time afterwards by running the command:

        sudo raspi-config

    This will bring up a menu of options. Select '8 Advanced Options' and on the next screen select 'A4 SSH'. Select 'Enable' and the task is complete.

    Step 2: Configure Raspberry Pi Networking
    By default, the Raspbian distribution configures the ethernet connection to use DHCP rather than a static IP address. You can continue to use DHCP if you want, but to avoid having to potentially change settings whenever you reboot the Pi, using a static IP address is simpler. To configure this on the Pi you need to edit the /etc/network/interfaces file. You will need to do this as root using the sudo command, so something like sudo vi /etc/network/interfaces. In this file you will see this line:

        iface eth0 inet dhcp

    This needs to be changed to the following:

        iface eth0 inet static
            address 10.0.0.2
            gateway 10.0.0.254
            netmask 255.255.255.0

    You will need to change the address, gateway and netmask values to an appropriate IP address and to match the address of your gateway.

    Step 3: Create a Public-Private Key Pair On Your Development Machine
    How you do this will depend on which operating system you are using:

    Mac OSX or Linux
    Run the command:

        ssh-keygen -t rsa

    Press ENTER/RETURN to accept the default destination for saving the key. We do not need a passphrase so simply press ENTER/RETURN for an empty one and once more to confirm. The key will be created in the file .ssh/id_rsa.pub in your home directory. Display the contents of this file using the cat command:

        cat ~/.ssh/id_rsa.pub

    Open a window, SSH to the Raspberry Pi and login.
    Change directory to .ssh and edit the authorized_keys file (don't worry if the file does not exist). Copy and paste the contents of the id_rsa.pub file to the authorized_keys file and save it.

    Windows
    Since Windows is not a UNIX derivative operating system it does not include the necessary key generating software by default. To generate the key I used puttygen.exe, which is available from the same site that provides the PuTTY application, here. Download this and run it on your Windows machine. Follow the instructions to generate a key. I remove the key comment, but you can leave that if you want. Click "Save private key", confirm that you don't want to use a passphrase and select a filename and location for the key.

    Copy the public key from the part of the window marked "Public key for pasting into OpenSSH authorized_keys file". Use PuTTY to connect to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if this does not exist). Paste the key information at the end of this file and save it.

    Logout and then start PuTTY again. This time we need to create a saved session using the private key. Type in the IP address of the Raspberry Pi in the "Hostname (or IP address)" field and expand "SSH" under the "Connection" category. Select "Auth" (see the screen shot below). Click the "Browse" button under "Private key file for authentication" and select the file you saved from puttygen. Go back to the "Session" category and enter a short name in the saved sessions field, as shown below. Click "Save" to save the session.

    Step 4: Test The Configuration
    You should now have the ability to use scp (Mac/Linux) or pscp.exe (Windows) to copy files from your development machine to the Raspberry Pi without needing to authenticate by typing in a password (so we can automate the process in NetBeans). It's a good idea to test this using something like:

        scp /tmp/foo pi@10.0.0.2:/tmp

    on Linux or Mac, or:

        pscp.exe foo pi@raspi:/tmp

    on Windows (note that we use the saved configuration name instead of the IP address or hostname so the key is picked up). pscp.exe is another tool available from the creators of PuTTY.

    Step 5: Configure the NetBeans Build Script
    Start NetBeans and create a new project (or open an existing one that you want to deploy automatically to the Raspberry Pi). Select the Files tab in the explorer window and expand your project. You will see a build.xml file. Double click this to edit it. This file will mostly be comments. At the end (but within the </project> tag) add the XML for <target name="-post-jar">, shown below in case you want to use cut-and-paste:

        <target name="-post-jar">
          <echo level="info" message="Copying dist directory to remote Pi"/>
          <exec executable="scp" dir="${basedir}">
            <arg line="-r"/>
            <arg value="dist"/>
            <arg value="pi@10.0.0.2:NetBeans/CopyTest"/>
          </exec>
        </target>

    For Windows it will be slightly different:

        <target name="-post-jar">
          <echo level="info" message="Copying dist directory to remote Pi"/>
          <exec executable="C:\pi\putty\pscp.exe" dir="${basedir}">
            <arg line="-r"/>
            <arg value="dist"/>
            <arg value="pi@raspi:NetBeans/CopyTest"/>
          </exec>
        </target>

    You will also need to ensure that pscp.exe is in your PATH (or specify a fully qualified pathname). From now on when you clean and build the project the dist directory will automatically be copied to the Raspberry Pi ready for testing.
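    Once the copy step is in place, you can test what you've built from a remote shell. A typical run might look like this (a sketch only: the jar name and main class are illustrative, based on the CopyTest project name used above):

        ssh pi@10.0.0.2
        java -cp NetBeans/CopyTest/dist/CopyTest.jar copytest.CopyTest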


  • SQL SERVER – Question to You – When to use Function and When to use Stored Procedure

    - by pinaldave
    This has been a very interesting week. I have asked a few questions to users and have received remarkable participation on the subject. Q1) SQL SERVER – Puzzle – SELECT * vs SELECT COUNT(*) Q2) SQL SERVER – Puzzle – Statistics are not Updated but are Created Once Keeping the same spirit up, I am asking the third question over here. Q3) When to use a User Defined Function and when to use a Stored Procedure in your development? Personally, I believe that they are different things - they cannot be compared. It would be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably many times and in real life (i.e., production environments) I have personally seen both of these used interchangeably many times. This is the precise reason for asking this question. When do you use a Function and when do you use a Stored Procedure? What are the pros and cons of each when used instead of the other? If you are going to answer that 'To avoid repeating code, you use a Function' - please think harder! A stored procedure can do the same. In SQL Server Denali, even a stored procedure can return the result just like a Function in a SELECT statement; so if you are going to answer with 'A Function can be used in SELECT, whereas a Stored Procedure cannot' - again, think harder! (link). Now, what do you say? I will post the answers to all three questions with due credit next week. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Question, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, SQLServer, T SQL, Technology
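    For anyone who wants to experiment before answering, here is a minimal sketch of the syntactic difference under discussion (the table and object names are hypothetical): a function composes inside a SELECT, while a stored procedure must be executed.

        -- A table-valued function composes inside a SELECT
        CREATE FUNCTION dbo.fn_OrdersByYear (@Year INT)
        RETURNS TABLE
        AS RETURN (SELECT OrderID, OrderDate FROM dbo.Orders WHERE YEAR(OrderDate) = @Year);
        GO
        SELECT o.OrderID FROM dbo.fn_OrdersByYear(2011) AS o;

        -- A stored procedure with the same logic is executed instead
        CREATE PROCEDURE dbo.usp_OrdersByYear @Year INT
        AS SELECT OrderID, OrderDate FROM dbo.Orders WHERE YEAR(OrderDate) = @Year;
        GO
        EXEC dbo.usp_OrdersByYear @Year = 2011;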


  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you've ever done spatial work with SQL Server, I hope you've come across the 'nearest' problem. You have five thousand stores around the world, and you want to identify the one that's closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don't want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let's do this (but I'm going to use addresses in AdventureWorks2012, as I don't have a list of stores). Oh, and before I do, let's make sure we have a spatial index in place. I'm going to use the default options.

        CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation);

    And my actual query:

        WITH MyLocations AS
            (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                                   ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                    ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;

    Great! This is definitely working. I know both those City locations, even if the AddressLine1s don't quite ring a bell. I'm sure I'll be able to find them next time I'm in the area. But of course what I'm concerned about from a querying perspective is what's happened behind the scenes – the execution plan. This isn't pretty. It's not using my index. It's sucking every row out of the Address table TWICE (which sucks), and then it's sorting them by the distance to find the smallest one. It's not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that's pretty neat. But yeah – users of my nifty website aren't going to like how long that query takes. The frustrating thing is that I know that I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the 'nearest' problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I'm not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it's split the data up into 4 threads, but it's still slow, and not using my index. It's disappointing. But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it's a thing of beauty! Part of the plan is here: It's massive, and it's ugly, and it uses a TVF… but it's quick. The way it works is to hook into the GeodeticTessellation function, which essentially finds where the point is, and works out the spatial index cells that surround it. This then provides a framework to be able to see into the spatial index for the items we want.
    You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but just because of the good that's going on. This tessellation stuff was introduced in SQL Server 2012. But my query isn't using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error:

        Msg 8622, Level 16, State 1, Line 1
        Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN.

    And I'm almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post.

        WITH MyLocations AS
            (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                                   ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                    ) t (Name, Geo))
        SELECT
            l.Name,
            COALESCE(a1.AddressLine1, a2.AddressLine1, a3.AddressLine1),
            COALESCE(a1.City, a2.City, a3.City),
            s.Name AS [State],
            c.Name AS Country
        FROM MyLocations AS l
        OUTER APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a1
        OUTER APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000
            AND a1.AddressID IS NULL
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a2
        OUTER APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000
            AND a2.AddressID IS NULL
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a3
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = COALESCE(a1.StateProvinceID, a2.StateProvinceID, a3.StateProvinceID)
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;

    But this isn't friendly-looking at all, and I'd use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I'm dealing with SQL 2012 (and later) versions. So why isn't my query doing what it's supposed to? Remember the query...

        WITH MyLocations AS
            (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                                   ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                    ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;

    Well, I just wasn't reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index:

    - A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses.
    - The TOP clause cannot contain a PERCENT statement.
    - The WHERE clause must contain a STDistance() method.
    - If there are multiple predicates in the WHERE clause then the predicate containing the STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause.
    - The first expression in the ORDER BY clause must use the STDistance() method.
    - Sort order for the first STDistance() expression in the ORDER BY clause must be ASC.
    - All the rows for which STDistance returns NULL must be filtered out.

    Let's start from the top.
    1. Needs a spatial index on one of the columns that's in the STDistance call. Yup, got the index.
    2. No 'PERCENT'. Yeah, I don't have that.
    3. The WHERE clause needs to use STDistance(). Ok, but I'm not filtering, so that should be fine.
    4. Yeah, I don't have multiple predicates.
    5. The first expression in the ORDER BY is my distance, that's fine.
    6. Sort order is ASC, because otherwise we'd be starting with the ones that are furthest away, and that's tricky.
    7. All the rows for which STDistance returns NULL must be filtered out. But I don't have any NULL values, so that shouldn't affect me either.

    ...but something's wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – "STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match." So if I simply make sure that I'm filtering out the rows that return NULL… …then it's blindingly fast, I get the right results, and I've got the complex-but-brilliant plan that I wanted. It just wasn't overly intuitive, despite being documented.

    @rob_farley
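    For completeness, here is a sketch of the final form of that query (a reconstruction rather than a verbatim quote; the fix is simply the IS NOT NULL predicate on STDistance, which satisfies requirements 3 and 7 above):

        WITH MyLocations AS
            (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                                   ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                    ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) IS NOT NULL
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;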


  • SQL SERVER – Disk Space Monitoring – Detecting Low Disk Space on Server

    - by Pinal Dave
    A very common question I receive is how to detect whether disk space is running low on a SQL Server machine. There are two different ways to do this. I personally prefer Method 2, as it is very easy to use and I can use it creatively along with the database name.

    Method 1:

        EXEC master..xp_fixeddrives
        GO

    The above query returns two columns: drive name and MB free. If we want to use this data in a query, we will have to create a temporary table, insert the data from this stored procedure into the temporary table, and use it from there.

    Method 2:

        SELECT DISTINCT
            dovs.logical_volume_name AS LogicalName,
            dovs.volume_mount_point AS Drive,
            CONVERT(INT, dovs.available_bytes/1048576.0) AS FreeSpaceInMB
        FROM sys.master_files mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
        ORDER BY FreeSpaceInMB ASC
        GO

    The above query gives us three columns: drive logical name, drive letter and free space in MB. We can further modify the above query to include the database name as well.

        SELECT DISTINCT
            DB_NAME(dovs.database_id) DBName,
            dovs.logical_volume_name AS LogicalName,
            dovs.volume_mount_point AS Drive,
            CONVERT(INT, dovs.available_bytes/1048576.0) AS FreeSpaceInMB
        FROM sys.master_files mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
        ORDER BY FreeSpaceInMB ASC
        GO

    This gives us additional data about which database is placed on which drive. If you see a database name multiple times, it is because your database has multiple files and they are on different drives. You can modify the above query one more time to include the details of the actual file location.

        SELECT DISTINCT
            DB_NAME(dovs.database_id) DBName,
            mf.physical_name PhysicalFileLocation,
            dovs.logical_volume_name AS LogicalName,
            dovs.volume_mount_point AS Drive,
            CONVERT(INT, dovs.available_bytes/1048576.0) AS FreeSpaceInMB
        FROM sys.master_files mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
        ORDER BY FreeSpaceInMB ASC
        GO

    The above query will now additionally include the physical file location. As I mentioned earlier, I prefer Method 2, as I can use it creatively as the business need dictates. Let me know which method you are using in your production servers. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
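    As a small addendum to Method 1 above, this is one way to capture the xp_fixeddrives output into a temporary table so it can be used in further queries (a sketch; the temp table name and the 1 GB threshold are illustrative):

        CREATE TABLE #drives (drive CHAR(1), MBFree INT);
        INSERT INTO #drives (drive, MBFree)
        EXEC master..xp_fixeddrives;
        SELECT drive, MBFree
        FROM #drives
        WHERE MBFree < 1024; -- e.g. flag drives with less than 1 GB free
        DROP TABLE #drives;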


  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with object databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criteria for choosing one today was simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDB. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a blogging application and front end as a project with which I can test and learn with these object databases. Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment, where I cannot install, run and maintain a server instance of said object database. I can do exactly this with db4o. I have not tried to do this with MongoDB at time of writing. There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used with some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010

    In the example which I am building I am using StructureMap as my IoC. To inject the object for db4o I went with a singleton instance scope, as I am using a single file and I need this to be available to any thread in the process, as opposed to using the server implementation, where I could open and close client connections with the server handling each one respectively. Again, I want to point out that I have chosen to stick with the non-server implementation of db4o because I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.

        public static class Bootstrapper
        {
            public static void ConfigureStructureMap()
            {
                ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
            }
        }

        public class MyApplicationRegistry : Registry
        {
            public const string DB4O_FILENAME = "blog123";

            public string DbPath
            {
                get
                {
                    return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);
                }
            }

            public MyApplicationRegistry()
            {
                For<IObjectContainer>().Singleton().Use(
                    () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

                Scan(assemblyScanner =>
                {
                    assemblyScanner.TheCallingAssembly();
                    assemblyScanner.WithDefaultConventions();
                });
            }
        }

    So the code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies. It would be easy to split any segment out later, but for the purposes of example I have literally just whacked everything in the one assembly. You can see an example file structure I have on the right.
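    As a quick illustration of how this wiring might be consumed at application start-up (hypothetical usage code, not part of the project itself; the Post initializer assumes the properties shown in the test code below):

        // Hypothetical start-up usage of the Bootstrapper above
        Bootstrapper.ConfigureStructureMap();
        var repository = ObjectFactory.GetInstance<IBlogRepository>();
        repository.Put(new Post { Title = "First post", Content = "Hello, db4o" });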
    I am planning on testing out a few implementations of the object databases out there so I can program to an interface, IBlogRepository. One of the things which I was unsure about was how it performed in a multi-threaded environment, which is how it will undoubtedly be used 9 times out of 10, and because I am using the db context as a singleton, I assumed that the library was of course thread safe, but I did not know, as I have not read it anywhere in the documentation; again, this is probably me not reading things correctly. In short, though, I threw together a simple test where I simply iterate to a limit, each time kicking a common task off with a thread from a thread pool. This task simply created a random Post and added it to the storage. The execution of the threads I put inside the Setup of the test, and then I simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi-threaded jobs:

        [TestInitialize]
        public void Setup()
        {
            var sw = new System.Diagnostics.Stopwatch();
            sw.Start();
            var resetEvent = new ManualResetEvent(false);
            ThreadPool.SetMaxThreads(20, 20);
            for (var i = 0; i < MAX_ITERATIONS; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate(object state)
                {
                    var eventToReset = (ManualResetEvent)state;
                    var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
                    Repository.Put(post);
                    var counter = Interlocked.Decrement(ref _threadCounter);
                    if (counter == 0)
                        eventToReset.Set();
                }, resetEvent);
            }
            WaitHandle.WaitAll(new[] { resetEvent });
            sw.Stop();
            Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
        }

    I was not doing this to test out the speed performance of db4o, but while I was doing this I could not help but put in a StopWatch and see, out of sheer interest, how fast it would be to insert a number of Posts. I tested it out in this case with 10000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts / second. Again, this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o.

    With regards to the actual Repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in the exposure I have had so far is that the Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth; this could be there already, and if not, they have provided everything one needs to make one's own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below:

        public class BlogRepository : IBlogRepository
        {
            private readonly IObjectContainer _db;

            public BlogRepository(IObjectContainer db)
            {
                _db = db;
            }

            public void Put(DomainObject obj)
            {
                _db.Store(obj);
            }

            public void Delete(DomainObject obj)
            {
                _db.Delete(obj);
            }

            public Post GetByKey(object key)
            {
                return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
            }
            …

    Anyway, I hope to get a few more implementations of the object databases going and literally just get familiarized with them and the concept of NoSQL databases.

    Cheers for now,
    Andrew


  • Connect ViewModel and View using Unity

    - by brainbox
    In this post I want to describe the approach of connecting View and ViewModel which I'm using in my latest project. The main idea is to do it during resolve, inside of the Unity container. It can be achieved using the InjectionFactory introduced in Unity 2.0:

        public static class MVVMUnityExtensions
        {
            public static void RegisterView<TView, TViewModel>(this IUnityContainer container) where TView : FrameworkElement
            {
                container.RegisterView<TView, TView, TViewModel>();
            }

            public static void RegisterView<TViewFrom, TViewTo, TViewModel>(this IUnityContainer container)
                where TViewTo : FrameworkElement, TViewFrom
            {
                container.RegisterType<TViewFrom>(new InjectionFactory(
                    c =>
                    {
                        var model = c.Resolve<TViewModel>();
                        var view = Activator.CreateInstance<TViewTo>();
                        view.DataContext = model;
                        return view;
                    }
                ));
            }
        }

    And here is a sample of how it can be used:

        var unityContainer = new UnityContainer();
        unityContainer.RegisterView<IFooView, FooView, FooViewModel>();
        IFooView view = unityContainer.Resolve<IFooView>(); // view with injected viewmodel in its datacontext

    Please tell me your preferred way to connect ViewModel and View.


  • Project Euler 51: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here's my solution for Project Euler Problem 51.  I know I started back up with Python this week, but I have three more Ruby solutions in the hopper and I wanted to share. For the record, Project Euler 51 was the second hardest Euler problem for me thus far. Yeah. As always, any feedback is welcome.

        # Euler 51
        # http://projecteuler.net/index.php?section=problems&id=51
        # By replacing the 1st digit of *3, it turns out that six
        # of the nine possible values: 13, 23, 43, 53, 73, and 83,
        # are all prime.
        #
        # By replacing the 3rd and 4th digits of 56**3 with the
        # same digit, this 5-digit number is the first example
        # having seven primes among the ten generated numbers,
        # yielding the family: 56003, 56113, 56333, 56443,
        # 56663, 56773, and 56993. Consequently 56003, being the
        # first member of this family, is the smallest prime with
        # this property.
        #
        # Find the smallest prime which, by replacing part of the
        # number (not necessarily adjacent digits) with the same
        # digit, is part of an eight prime value family.
        timer_start = Time.now

        require 'mathn'

        def eight_prime_family(prime)
          0.upto(9) do |repeating_number|
            # Assume mask of 3 or more repeating numbers
            if prime.count(repeating_number.to_s) >= 3
              ctr = 1
              (repeating_number + 1).upto(9) do |replacement_number|
                family_candidate = prime.gsub(repeating_number.to_s, replacement_number.to_s)
                ctr += 1 if (family_candidate.to_i).prime?
              end
              return true if ctr >= 8
            end
          end
          false
        end

        # Wanted to loop through primes using Prime.each
        # but it took too long to get to the starting value.
        n = 9999
        while n += 2
          next if !n.prime?
          break if eight_prime_family(n.to_s)
        end

        puts n
        puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"


  • Design For Asynchronous User Interface

    - by Sohnee
    I have been working on an integration that has posed an interesting user interface conundrum that I would like suggestions for. The user interface is displayed within a third-party product. The state of the interface is supplied by calls to a service I have written. There can be small delays between the actual state changing and the user interface changing, due to the polling for state by the third party. When a user interacts with the user interface, requests are sent back to my application. This then affects the state, and the next state poll request will update the user interface. The problem is that the delay between pressing a button and seeing the user interface update is perhaps 1 or 2 seconds, and in usability testing I can see that people are clicking again before the user interface updates, thinking that they haven't properly clicked the first time. Given the constraints (we can only update the user interface via the polling mechanism - if we updated it when they clicked, the polling might return and overwrite the change, causing unpredictable / undesirable results)... what can we do to make the user experience better? My current idea is to show a message for a couple of seconds so people know their click was accepted; the message would not be affected by the state polling, so it wouldn't be prematurely removed or overwritten. I'm sure there are other ideas out there, and I'm also confident someone has a better idea than I have!
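    A rough browser-side sketch of the transient-message idea (all element IDs and function names here are illustrative, since the real interface lives inside the third-party product):

        // Show immediate feedback that is independent of the state polling
        function onActionButtonClick(button) {
            sendRequestToService(button.id);                    // request sent back to my application
            showTransientMessage('Request received...', 2000);  // visible for ~2 seconds
            button.disabled = true;                             // optionally swallow double-clicks
        }

        function showTransientMessage(text, durationMs) {
            var statusElement = document.getElementById('status');
            statusElement.textContent = text;
            setTimeout(function () { statusElement.textContent = ''; }, durationMs);
        }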

    Read the article

  • Cocos2d-x v3 invalid conversion from 'cocos2d::Layer* [on hold]

    - by Hammerh5
Hello guys, I'm learning cocos2d-x v3 right now, but most of the code that I can find is for version 2. My specific error is this one, which shows when I try to compile my cocos2d-x 3 project:

invalid conversion from 'cocos2d::Layer*' to 'Game*' [-fpermissive]

What I want to do is create a new game scene in the following code:

//Game.cpp
#include "Game.h"

Scene* Game::scene()
{
    scene *sc = CCScene::create();
    sc->setTag(TAG_GAME_SCENE);
    const Game *g = Game::create(); // Here is where the conversion fails.
    sc->addChild(g, 0, TAG_GAME_LAYER);
    return sc;
}

Of course this is my header file:

//Game.h
#ifndef GAME_H_
#define GAME_H_

#include "cocos2d.h"
#include "Mole.h"
#include "AppDelegate.h"

using namespace cocos2d;

class Game: public cocos2d::Layer
{
    cocos2d::CCArray *moles;
    float timeBetweenMoles, timeElapsed, increaseMolesAtTime, increaseElapsed, lastMoleHiTime;
    int molesAtOnce;
    cocos2d::CCSize s;
    bool isPaused;

public:
    CCString *missSound, *hitSound;
    static cocos2d::Scene* scene();
    virtual bool init();
    void showMole();
    void initializeGame();
    void onEnterTransitionDidFinish();
    void onExit();
    void onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
    void tick(float dt);
    cocos2d::CCArray* getMoles(bool isUp);
    //LAYER_CREATE_FUNC(Game);
};

#endif /* GAME_H_ */

I don't know what's wrong; I suppose this code works fine in cocos2d-x v2. Is it maybe due to some changes in the C++ version?
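For what it's worth, here is a hedged sketch of one likely fix, assuming standard cocos2d-x v3 conventions rather than anything stated in the post: Layer::create() returns a Layer*, so Game needs its own create() generated by the CREATE_FUNC macro, and the pointer passed to addChild should not be const:

// Sketch only; TAG_GAME_SCENE and TAG_GAME_LAYER are the constants from the question.
// In Game.h, replace the commented-out LAYER_CREATE_FUNC line with v3's macro:
//     CREATE_FUNC(Game);   // generates: static Game* create()

// Game.cpp
cocos2d::Scene* Game::scene()
{
    auto sc = cocos2d::Scene::create();   // v3 name; CCScene is the deprecated v2 alias
    sc->setTag(TAG_GAME_SCENE);
    Game *g = Game::create();             // now returns Game*, so no Layer* conversion
    sc->addChild(g, 0, TAG_GAME_LAYER);
    return sc;
}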

    Read the article

  • What do you think of this generator syntax?

    - by ChaosPandion
I've been working on an ECMAScript dialect for quite some time now and have reached a point where I am comfortable adding new language features. I would love to hear some thoughts and suggestions on the syntax.

Example

generator {
    yield 1;
    yield 2;
    yield 3;
    if (true) {
        yield break;
    }
    yield continue generator {
        yield 4;
        yield 5;
        yield 6;
    };
}

Syntax

GeneratorExpression :
    generator { GeneratorBody }

GeneratorBody :
    GeneratorStatements_opt

GeneratorStatements :
    StatementList_opt GeneratorStatement GeneratorStatements_opt

GeneratorStatement :
    YieldStatement
    YieldBreakStatement
    YieldContinueStatement

YieldStatement :
    yield Expression ;

YieldBreakStatement :
    yield break ;

YieldContinueStatement :
    yield continue Expression ;

Semantics

The YieldBreakStatement allows you to end iteration early. This helps avoid deeply indented code. You'll be able to write something like this:

generator {
    yield something1();
    if (condition1 && condition2)
        yield break;
    yield something2();
    if (condition3 && condition4)
        yield break;
    yield something3();
}

instead of:

generator {
    yield something1();
    if (!condition1 && !condition2) {
        yield something2();
        if (!condition3 && !condition4) {
            yield something3();
        }
    }
}

The YieldContinueStatement allows you to combine generators:

function generateNumbers(start) {
    return generator {
        yield 1 + start;
        yield 2 + start;
        yield 3 + start;
        if (start < 100) {
            yield continue generateNumbers(start + 1);
        }
    };
}

    Read the article

  • Isometric displaying two different images in different positions

    - by Canvas
I'm creating a simple isometric game using HTML5 and JavaScript, but I can't seem to get the display to work. At the moment I have 9 tiles that have X and Y positions, and the player has an X and Y position; the player's X and Y properties are set to 100, and the tiles are as shown:

tiles[0] = new Array(3);
tiles[1] = new Array(3);
tiles[2] = new Array(3);
tiles[0][0] = new point2D( 100, 100);
tiles[0][1] = new point2D( 160, 100);
tiles[0][2] = new point2D( 220, 100);
tiles[1][0] = new point2D( 100, 160);
tiles[1][1] = new point2D( 160, 160);
tiles[1][2] = new point2D( 220, 160);
tiles[2][0] = new point2D( 100, 220);
tiles[2][1] = new point2D( 160, 220);
tiles[2][2] = new point2D( 220, 220);

Now I use this method to work out the isometric position:

function twoDToIso( point )
{
    var cords = point2D;
    cords.x = point.x - point.y;
    cords.y = (point.x + point.y) / 2;
    return cords;
}

point2D is:

function point2D( x, y)
{
    this.x = x;
    this.y = y;
}

Now this I'm sure works out the correct positions, but here is the output (screenshot: "Isometric view"). I just need to move my player position a tiny bit, but what is the best way to draw the player in the right position? Canvas
P.S. the tile width is 120 and height is 60, and the player is 30 wide by 15 high.
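Two hedged observations, offered as assumptions rather than a definitive answer: twoDToIso assigns the point2D constructor function itself to cords, so every call mutates and returns the same shared object, and the player sprite is probably drawn from its top-left corner, so centring it needs a half-size offset. A sketch follows (names like playerWidth are invented, and the exact sign of the offset depends on where your tile images are anchored):

function twoDToIso(point) {
  // return a fresh point instead of reusing the point2D function object
  return new point2D(point.x - point.y, (point.x + point.y) / 2);
}

// Hypothetical draw-time offset: with 120x60 tiles and a 30x15 player,
// shift the sprite so it sits centred on its tile's diamond.
var iso = twoDToIso(new point2D(player.x, player.y));
var drawX = iso.x + (tileWidth - playerWidth) / 2;   // (120 - 30) / 2 = 45
var drawY = iso.y + (tileHeight - playerHeight) / 2; // (60 - 15) / 2 = 22.5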

    Read the article

  • JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue

    - by John-Brown.Evans
JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue

This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. In the first post, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g, we looked at how to create a JMS queue and its dependent objects in WebLogic Server. In the previous post, JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue, I showed how to write a message to that JMS queue using the QueueSend.java sample program. In this article, we will use a similar sample, the QueueReceive.java program, to read the message from that queue. Please review the previous posts if you have not already done so, as they contain prerequisites for executing the sample in this article.

1. Source code

The following Java code will be used to read the message(s) from the JMS queue. As with the previous example, it is based on a sample program shipped with the WebLogic Server installation. The sample is not installed by default, but needs to be installed manually using the WebLogic Server Custom Installation option, together with many other useful samples. You can either copy-paste the following code into your editor, or install all the samples. The knowledge base article in My Oracle Support: How To Install WebLogic Server and JMS Samples in WLS 10.3.x (Doc ID 1499719.1) describes how to install the samples.
QueueReceive.java

package examples.jms.queue;

import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

/**
 * This example shows how to establish a connection to
 * and receive messages from a JMS queue. The classes in this
 * package operate on the same JMS queue. Run the classes together to
 * witness messages being sent and received, and to browse the queue
 * for messages. This class is used to receive and remove messages
 * from the queue.
 *
 * @author Copyright (c) 1999-2005 by BEA Systems, Inc. All Rights Reserved.
 */
public class QueueReceive implements MessageListener
{
    // Defines the JNDI context factory.
    public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";

    // Defines the JMS connection factory for the queue.
    public final static String JMS_FACTORY = "jms/TestConnectionFactory";

    // Defines the queue.
    public final static String QUEUE = "jms/TestJMSQueue";

    private QueueConnectionFactory qconFactory;
    private QueueConnection qcon;
    private QueueSession qsession;
    private QueueReceiver qreceiver;
    private Queue queue;
    private boolean quit = false;

    /**
     * Message listener interface.
     * @param msg message
     */
    public void onMessage(Message msg)
    {
        try {
            String msgText;
            if (msg instanceof TextMessage) {
                msgText = ((TextMessage) msg).getText();
            } else {
                msgText = msg.toString();
            }
            System.out.println("Message Received: " + msgText);
            if (msgText.equalsIgnoreCase("quit")) {
                synchronized (this) {
                    quit = true;
                    this.notifyAll(); // Notify main thread to quit
                }
            }
        } catch (JMSException jmse) {
            System.err.println("An exception occurred: " + jmse.getMessage());
        }
    }

    /**
     * Creates all the necessary objects for receiving
     * messages from a JMS queue.
     *
     * @param ctx JNDI initial context
     * @param queueName name of queue
     * @exception NamingException if operation cannot be performed
     * @exception JMSException if JMS fails to initialize due to internal error
     */
    public void init(Context ctx, String queueName) throws NamingException, JMSException
    {
        qconFactory = (QueueConnectionFactory) ctx.lookup(JMS_FACTORY);
        qcon = qconFactory.createQueueConnection();
        qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        queue = (Queue) ctx.lookup(queueName);
        qreceiver = qsession.createReceiver(queue);
        qreceiver.setMessageListener(this);
        qcon.start();
    }

    /**
     * Closes JMS objects.
     * @exception JMSException if JMS fails to close objects due to internal error
     */
    public void close() throws JMSException
    {
        qreceiver.close();
        qsession.close();
        qcon.close();
    }

    /**
     * main() method.
     *
     * @param args WebLogic Server URL
     * @exception Exception if execution fails
     */
    public static void main(String[] args) throws Exception
    {
        if (args.length != 1) {
            System.out.println("Usage: java examples.jms.queue.QueueReceive WebLogicURL");
            return;
        }
        InitialContext ic = getInitialContext(args[0]);
        QueueReceive qr = new QueueReceive();
        qr.init(ic, QUEUE);
        System.out.println("JMS Ready To Receive Messages (To quit, send a \"quit\" message).");

        // Wait until a "quit" message has been received.
        synchronized (qr) {
            while (!qr.quit) {
                try {
                    qr.wait();
                } catch (InterruptedException ie) {}
            }
        }
        qr.close();
    }

    private static InitialContext getInitialContext(String url) throws NamingException
    {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
        env.put(Context.PROVIDER_URL, url);
        return new InitialContext(env);
    }
}
2. How to Use This Class

2.1 From the file system on Linux

This section describes how to use the class from the file system of a WebLogic Server installation.

Log in to a machine with a WebLogic Server installation and create a directory to contain the source and code matching the package name, e.g. $HOME/examples/jms/queue. Copy the above QueueReceive.java file to this directory.

Set the CLASSPATH and environment to match the WebLogic server environment: go to $MIDDLEWARE_HOME/user_projects/domains/base_domain/bin and execute
. ./setDomainEnv.sh

Collect the following information required to run the program:

The JNDI name of the JMS queue to use: in the WebLogic server console > Services > Messaging > JMS Modules > Module name (e.g. TestJMSModule) > JMS queue name (e.g. TestJMSQueue), select the queue and note its JNDI name, e.g. jms/TestJMSQueue.

The JNDI name of the connection factory to use to connect to the queue: follow the same path as above to get the connection factory for the above queue, e.g. TestConnectionFactory, and its JNDI name, e.g. jms/TestConnectionFactory.

The URL and port of the WebLogic server running the above queue: check the JMS server for the above queue and the managed server it is targeted to, for example soa_server1. Now find the port this managed server is listening on by looking at its entry under Environment > Servers in the WLS console, e.g. 8001. The URL for the server to be passed to the QueueReceive program will therefore be t3://host.domain:8001, e.g. t3://jbevans-lx.de.oracle.com:8001.

Edit QueueReceive.java and enter the above connection factory and queue names respectively under
...
public final static String JMS_FACTORY="jms/TestConnectionFactory";
...
public final static String QUEUE="jms/TestJMSQueue";
...

Compile QueueReceive.java using
javac QueueReceive.java

Go to the source's top-level directory and execute it using
java examples.jms.queue.QueueReceive t3://jbevans-lx.de.oracle.com:8001

This will print a message that it is ready to receive messages or to send a "quit" message to end. The program will read all messages in the queue and print them to the standard output until it receives a message with the payload "quit". A consolidated command sketch follows below.

2.2 From JDeveloper

The steps from JDeveloper are the same as those used for the previous program, QueueSend.java, which is used to send a message to the queue, so we won't repeat them here. Please see the previous blog post at JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue and apply the same steps in that example to the QueueReceive.java program.

This concludes the example. In the following post we will create a BPEL process which writes a message based on an XML schema to the queue.
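For convenience, here is a hedged sketch pulling the above steps into a single shell sequence; the paths and host are just the examples used in this post, so substitute your own:

# Sketch only: assumes the source sits at $HOME/examples/jms/queue/QueueReceive.java
cd $MIDDLEWARE_HOME/user_projects/domains/base_domain/bin
. ./setDomainEnv.sh
cd $HOME
javac examples/jms/queue/QueueReceive.java
java examples.jms.queue.QueueReceive t3://jbevans-lx.de.oracle.com:8001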

    Read the article

  • How Do I Implement parameterMaps for ADF Regions and Dynamic Regions?

    - by david.giammona
parameterMap objects defined by managed beans can help reduce the number of child <parameter> elements listed under an ADF region or dynamic region page definition task flow binding. But more importantly, the parameterMap approach also allows greater flexibility in determining what input parameters are passed to an ADF region or dynamic region. This can be especially helpful when using dynamic regions, where each task flow utilized can provide an entirely different set of input parameters. The parameterMap is specified within an ADF region or dynamic region page definition task flow binding as shown below:

<taskFlow id="checkoutflow1"
          taskFlowId="/WEB-INF/checkout-flow.xml#checkout-flow"
          activation="deferred"
          xmlns="http://xmlns.oracle.com/adf/controller/binding"
          parametersMap="#{pageFlowScope.userInfoBean.parameterMap}"/>

The parameter map object must implement the java.util.Map interface. The keys it specifies match the names of input parameters defined by the task flows utilized within the task flow binding. An example parameterMap object class is shown below:

import java.util.HashMap;
import java.util.Map;

public class UserInfoBean
{
    private Map<String, Object> parameterMap = new HashMap<String, Object>();

    public Map getParameterMap()
    {
        // getSecurity() is assumed to be defined elsewhere in this bean
        parameterMap.put("isLoggedIn", getSecurity().isAuthenticated());
        parameterMap.put("principalName", getSecurity().getPrincipalName());
        return parameterMap;
    }
}

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 1 (sys.dm_exec_requests)

    - by Tamarick Hill
The first DMO that I would like to introduce you to is one of the most common and basic DMVs out there. I use the term DMV because this DMO is actually a view, as opposed to a function. This DMV is server-scoped and it returns information about all requests that are currently executing on your SQL Server instance. To illustrate what this DMV returns, let's take a look at the results. As you can see, this DMV returns a wealth of information about requests occurring on your server. You are able to see the SPID, the start time of a request, the current status, and the command the SPID is executing. In addition to this, you see columns for sql_handle and plan_handle. These columns (when combined with other DMOs we will discuss later) can return the actual SQL text that is being executed on your server as well as the actual execution plan that is cached and being used. This DMV also returns information about the various wait types that may be occurring for your SPID. The percent_complete column displays a percentage to completion for certain database actions such as DBCC CHECKDB, database restores, rollbacks, etc. In addition to these, you are also able to see the amount of reads, writes, and CPU that the SPID has consumed. You will find this DMV to be one of the primary DMVs that you use when looking for information about what is occurring on your server.
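Since the results screenshot does not survive in this excerpt, here is a hedged example of the kind of query behind it; the column list is abridged, and the session_id filter is just the common convention for skipping most system sessions:

-- Sketch: inspect currently executing requests
SELECT r.session_id,
       r.start_time,
       r.status,
       r.command,
       r.wait_type,
       r.percent_complete,
       r.reads,
       r.writes,
       r.cpu_time,
       r.sql_handle,
       r.plan_handle
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50; -- sessions at or below 50 are usually system sessions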

    Read the article

  • Re-running SSRS subscription jobs that have failed

    - by Rob Farley
Sometimes, an SSRS subscription fails for some reason. It can be annoying, particularly as the appropriate response can be hard to see immediately. There may be a long list of jobs that failed one morning if a mail server is down, and trying to work out a way of running each one again can be painful. It's almost an argument for using shared schedules a lot, but the problem with this is that there are bound to be other things on that shared schedule that you wouldn't want to be re-run. Luckily, there's a table in the ReportServer database called dbo.Subscriptions, which is where the LastStatus of each subscription is stored. Having found the subscriptions that you're interested in, finding the SQL Agent jobs that correspond to them can be frustrating. Luckily, the job step command contains the subscriptionid, so it's possible to look them up based on that. And of course, once the jobs have been found, they can be executed easily enough. In this example, I produce a list of the commands to run the jobs. I can copy the results out and execute them.

select 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + ''''
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
join [ReportServer].[dbo].[Subscriptions] s on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
where s.LastStatus like 'Failure sending mail%';

Another option could be to return the job step commands directly (js.command in this query), but my preference is to run the job that contains the step.

    Read the article

  • Finetuning movement based on gradual rotation towards a target

    - by A.B.
I have an object which moves towards a target destination by gradually adjusting its facing while moving forwards. If the target destination is in a "blind spot", then the object is incapable of reaching it. This problem is illustrated in the picture below: when the arrow is ordered to move to point A, it will only end up circling around it (following the red circle) because it is not able to adjust its rotation quickly enough. I'm interested in a solution where the movement speed is multiplied by a number from 0.1 to 1 in proportion to necessity. The problem is, how do I calculate whether slowing down is necessary in the first place? And how do I calculate an appropriate multiplier that is neither too small nor too large?

void moveToPoint(sf::Vector2f destination)
{
    if (destination == position)
        return;

    auto movement_distance = distanceBetweenPoints(position, destination);
    desired_rotation = angleBetweenPoints(position, destination);

    /// Check whether rotation should be adjusted
    if (rotation != desired_rotation)
    {
        /// Check whether the object can achieve the desired rotation within the next adjustment of its rotation
        if (Radian::isWithinDistance(rotation, desired_rotation, rotation_speed))
        {
            rotation = desired_rotation;
        }
        else
        {
            /// Determine whether to increment or decrement rotation in order to achieve desired rotation
            if (Radian::convert(desired_rotation - rotation) > 0)
            {
                /// Increment rotation
                rotation += rotation_speed;
            }
            else
            {
                /// Decrement rotation
                rotation -= rotation_speed;
            }
        }
    }

    if (movement_distance < movement_speed)
    {
        position = destination;
    }
    else
    {
        position.x = position.x + movement_speed * cos(rotation);
        position.y = position.y + movement_speed * sin(rotation);
    }

    updateGraphics();
}
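One hedged way to answer both questions, offered as an assumption rather than a known-good solution: with movement_speed in units per update and rotation_speed in radians per update, the minimum turning radius is R = movement_speed / rotation_speed, so any target closer than the diameter 2R can fall inside the blind spot. Slowing down shrinks R, which suggests a multiplier proportional to distance / 2R, clamped to the requested 0.1 to 1 range:

#include <algorithm> // for std::clamp (C++17)

// Sketch: speed multiplier that shrinks the turning circle near the target.
// Units match moveToPoint above: movement_speed per update, rotation_speed
// in radians per update.
float speedMultiplierFor(float distance_to_target,
                         float movement_speed,
                         float rotation_speed)
{
    const float turning_radius = movement_speed / rotation_speed;
    const float blind_zone = 2.0f * turning_radius; // targets inside may be unreachable
    if (distance_to_target >= blind_zone)
        return 1.0f; // far away: full speed cannot cause orbiting
    return std::clamp(distance_to_target / blind_zone, 0.1f, 1.0f);
}

Multiplying movement_speed by this factor before the position update keeps the object creeping forward while it turns, since the multiplier never drops below 0.1.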

    Read the article

  • Getting Query Parameters in Javascript

    - by PhubarBaz
    I find myself needing to get query parameters that are passed into a web app on the URL quite often. At first I wrote a function that creates an associative array (aka object) with all of the parameters as keys and returns it. But then I was looking at the revealing module pattern, a nice javascript design pattern designed to hide private functions, and came up with a way to do this without even calling a function. What I came up with was this nice little object that automatically initializes itself into the same associative array that the function call did previously. // Creates associative array (object) of query params var QueryParameters = (function() {     var result = {};     if (window.location.search)     {         // split up the query string and store in an associative array         var params = window.location.search.slice(1).split("&");         for (var i = 0; i < params.length; i++)         {             var tmp = params[i].split("=");             result[tmp[0]] = unescape(tmp[1]);         }     }     return result; }()); Now all you have to do to get the query parameters is just reference them from the QueryParameters object. There is no need to create a new object or call any function to initialize it. var debug = (QueryParameters.debug === "true"); or if (QueryParameters["debug"]) doSomeDebugging(); or loop through all of the parameters. for (var param in QueryParameters) var value = QueryParameters[param]; Hope you find this object useful.

    Read the article

  • ODI 11g – Expert Accelerator for Model Creation

    - by David Allan
Following on from my post earlier this morning on scripting model and topology creation, tonight I thought I'd add a little UI to make those groovy functions a little more palatable. In OWB we have experts for capturing user input; with the groovy console we open up opportunities to build UI around the scripts in a very easy way, even I can do it ;-) After a little googling around I found some useful posts on SwingBuilder; the most useful one that I used for the dialog below was this one here. This dialog captures user input for the technology and context for the model and logical schema etc. to be created. You can see there are a variety of interesting controls, and it's really easy to do. The dialog captures the user's input; then, when OK is pressed, I call the functions from the earlier post to create the logical schema (plus all the other objects) and the model. The image below shows what was created: you can see the model (with a typo in its name); the model is Oracle technology and references the logical schema ORACLE_SCOTT (that I named in the dialog above); the logical schema is mapped via the GLOBAL context to the data server ORACLE_SCOTT_DEV (that I named in the dialog above); and the physical schema used was just the user name that I connected with, so if you wanted a different user, the schema name could be added to the dialog. In a nutshell: one dialog that encapsulates a simpler mechanism for creating a model. You can create your own scripts that use dialogs like this, capture input and process it. You can find the groovy script for this here: odi_create_model.groovy. Again, I wrapped the user capture code in a groovy function, returned the result in a variable, and then simply called the createLogicalSchema and createModel functions from the previous posting. The script supplied above has everything you will need. To execute, use Tools->Groovy->Open Script and then execute the green play button on the toolbar. Have fun.
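To give a feel for how little SwingBuilder code such a dialog needs, here is a hedged, stripped-down sketch; the field names and defaults are invented, and the real odi_create_model.groovy linked above remains the authoritative version:

import groovy.swing.SwingBuilder
import javax.swing.JFrame

def swing = new SwingBuilder()
swing.edt {
    frame(title: 'Create Model', pack: true, show: true,
          defaultCloseOperation: JFrame.DISPOSE_ON_CLOSE) {
        vbox {
            label 'Logical schema name:'
            def schemaField = textField(columns: 20, text: 'ORACLE_SCOTT')
            label 'Context:'
            def contextField = textField(columns: 20, text: 'GLOBAL')
            button('OK', actionPerformed: {
                // createLogicalSchema / createModel from the earlier post
                // would be invoked here with the captured values
                println "schema=${schemaField.text} context=${contextField.text}"
            })
        }
    }
}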

    Read the article

  • How to fix "apt-get upgrade" errors?

    - by mohamad farid bin abdullah
I get these errors when I try to upgrade the packages installed on my Ubuntu system:

m@m-desktop ~ $ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2 not fully installed or removed.
After this operation, 0B of additional disk space will be used.
Do you want to continue [Y/n]? y
Setting up drbd8-source (2:8.3.7-1ubuntu2.3) ...
Removing old drbd8-8.3.7 DKMS files...
------------------------------
Deleting module version: 8.3.7
completely from the DKMS tree.
------------------------------
Done.
Loading new drbd8-8.3.7 DKMS files...
First Installation: checking all kernels...
Building only for 2.6.35-22-generic
Building for architecture i386
Building initial module for 2.6.35-22-generic
Error! Bad return status for module build on kernel: 2.6.35-22-generic (i386)
Consult the make.log in the build directory /var/lib/dkms/drbd8/8.3.7/build/ for more information.
dpkg: error processing drbd8-source (--configure):
 subprocess installed post-installation script returned error exit status 10
dpkg: dependency problems prevent configuration of drbd8-utils:
 drbd8-utils depends on drbd8-source; however:
  Package drbd8-source is not configured yet.
dpkg: error processing drbd8-utils (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
Errors were encountered while processing:
 drbd8-source
 drbd8-utils
E: Sub-process /usr/bin/dpkg returned an error code (1)
m@m-desktop ~ $

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers. I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users:

CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER
WITH ENCRYPTION, EXECUTE AS N'sa'
AFTER LOGON
AS
IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN;
-- interesting stuff goes here
GO

What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired). That's a good use for logon triggers, but really not tricky enough for this blog. Some of my suggestions are below:

WAITFOR DELAY '23:59:59';

Or:

EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'

Or:

EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3;
EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME;
EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;';
EXEC msdb.dbo.sp_start_job @job_name=N'`';

Really, I don't want to spoil your own exploration, try it yourself! The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!". Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW!
Here's how RG can do that:

USE master;
GO

CREATE FUNCTION dbo.SA_LOGIN_PRIORITY()
RETURNS sysname
WITH SCHEMABINDING, ENCRYPTION
AS
BEGIN
    RETURN CASE
        WHEN ORIGINAL_LOGIN() = N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%'
        THEN N'SA_LOGIN_PRIORITY'
        ELSE N'default'
    END
END
GO

CREATE RESOURCE POOL SA_LOGIN_PRIORITY
WITH (
     MIN_CPU_PERCENT = 0
    ,MAX_CPU_PERCENT = 1
    ,CAP_CPU_PERCENT = 1
    ,AFFINITY SCHEDULER = (0)
    ,MIN_MEMORY_PERCENT = 0
    ,MAX_MEMORY_PERCENT = 1
--  ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014
);

CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY
WITH (
     IMPORTANCE = LOW
    ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1
    ,REQUEST_MAX_CPU_TIME_SEC = 1
    ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1
    ,MAX_DOP = 1
    ,GROUP_MAX_REQUESTS = 1
) USING SA_LOGIN_PRIORITY;

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.SA_LOGIN_PRIORITY);
ALTER RESOURCE GOVERNOR RECONFIGURE;

From top to bottom:
- Create a classifier function to determine which pool the session should go to. More info on classifier functions.
- Create the pool and provide a generous helping of resources for the sa login.
- Create the workload group and further prioritize those resources for the sa login.
- Apply the classifier function and reconfigure RG to use it.

I have to say this one is a bit sneakier than the logon trigger, not least because you don't get any error messages. I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included! You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group. You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me:
- Set up different pools/workgroups with different settings and randomize which one the classifier chooses
- Do the same but base it on time of day (Books Online example covers this)...
- Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you.

And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection:

ALTER RESOURCE GOVERNOR DISABLE;

That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here. Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server. These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly. SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only. And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications!
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized. They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup:

USE msdb;
GO

CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act
WITH ENCRYPTION
AS
DECLARE @x XML, @message nvarchar(max);
RECEIVE @x = CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q;
IF @x.value('(//LoginName)[1]','sysname') = N'sa'
   AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%'
BEGIN
    -- interesting activation procedure stuff goes here
END
GO

CREATE QUEUE SA_LOGIN_PRIORITY_q
WITH STATUS=ON, RETENTION=OFF,
ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER);

CREATE SERVICE SA_LOGIN_PRIORITY_s
ON QUEUE SA_LOGIN_PRIORITY_q
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en
ON SERVER WITH FAN_IN
FOR AUDIT_LOGIN
TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database'
GO

From top to bottom:
- Create activation procedure for event notification queue.
- Create queue to accept messages from event notification, and activate the procedure to process those messages when received.
- Create service to send messages to that queue.
- Create event notification on AUDIT_LOGIN events that fire the service.

I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"? Hmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message:

SET @message = @x.value('(//HostName)[1]','sysname')
    + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname')
    + N' as SA at ' + @x.value('(//StartTime)[1]','sysname')
    + N' using the ' + @x.value('(//ApplicationName)[1]','sysname')
    + N' program. That''s why you''re getting this message and the attached pornography which'
    + N' is bloating your inbox and violating company policy, among other things. If you know'
    + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL '
    + @x.value('(//SPID)[1]','sysname')
    + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.'

EXEC msdb.dbo.sp_send_dbmail
    @recipients=N'dba.team@example.com',  -- placeholder address
    @subject=N'SA Login Alert',
    @query_result_width=32767,
    @body=@message,
    @query=N'EXEC sp_readerrorlog;',
    @attach_query_result_as_file=1,
    @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg'

I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it. The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly. I'm still working on those.

    Read the article

  • What is the difference between String and string in C#

    - by SAMIR BHOGAYTA
string:
------
The string type represents a sequence of zero or more Unicode characters. string is an alias for String in the .NET Framework. 'string' is the intrinsic C# datatype, and is an alias for the system-provided type "System.String". The C# specification states that, as a matter of style, the keyword ('string') is preferred over the full system type name (System.String, or String). Although string is a reference type, the equality operators (== and !=) are defined to compare the values of string objects, not references. This makes testing for string equality more intuitive (see the example at the end of this excerpt).

String:
------
A String object is called immutable (read-only) because its value cannot be modified once it has been created. Methods that appear to modify a String object actually return a new String object that contains the modification. If it is necessary to modify the actual contents of a string-like object, use the System.Text.StringBuilder class.

Difference between string & String:
-----------------------------------
string is usually used for declarations, while String is used for accessing static string methods. We can use 'string' to declare fields, properties etc. that use the predefined type 'string', since the C# specification tells us this is good style. We can use 'String' to call system-defined methods, such as String.Compare etc. They are originally defined on 'System.String', not 'string'; 'string' is just an alias in this case. We can also use 'String' or 'System.Int32' when communicating with other systems, especially if they are CLR-compliant. I.e., if I get data from elsewhere, I'd deserialize it into a System.Int32 rather than an 'int', if the origin by definition was something other than a C# system.
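Here is a hedged reconstruction of the example referenced above, the classic value-versus-reference demonstration; the class wrapper is added only to make it runnable:

using System;

class StringEqualityDemo
{
    static void Main()
    {
        string a = "hello";
        string b = "h";
        b += "ello"; // b is now a distinct "hello" object built at runtime

        Console.WriteLine(a == b);                 // True:  == compares the values
        Console.WriteLine((object)a == (object)b); // False: the cast forces reference comparison
    }
}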

    Read the article

< Previous Page | 750 751 752 753 754 755 756 757 758 759 760 761  | Next Page >